WebGPU

W3C 候補勧告草案、2025年8月20日

この文書の詳細情報
このバージョン:
https://www.w3.org/TR/2025/CRD-webgpu-20250820/
最新公開バージョン:
https://www.w3.org/TR/webgpu/
編集者ドラフト:
https://gpuweb.github.io/gpuweb/
以前のバージョン:
履歴:
https://www.w3.org/standards/history/webgpu/
フィードバック:
public-gpu@w3.org 件名 “[webgpu] … メッセージのトピック …” (アーカイブ)
GitHub
編集者:
(Google)
(Google)
(Mozilla)
元編集者:
(Apple Inc.)
(Mozilla)
(Apple Inc.)
参加方法:
課題を提出 (公開中の課題)
テストスイート:
WebGPU CTS

概要

WebGPUは、グラフィックス処理ユニット上でレンダリングや計算などの操作を行うためのAPIを公開します。

この文書のステータス

このセクションは、公開時点でのこの文書のステータスについて説明します。現行のW3C公開文書および本技術レポートの最新改訂は、W3C標準・草案一覧でご確認いただけます。

この仕様へのフィードバックやコメントを歓迎します。 この仕様に関する議論にはGitHub Issuesの利用が推奨されます。あるいは、GPU for the Web ワーキンググループのメーリングリストpublic-gpu@w3.org(アーカイブ)へコメントを送信することもできます。 この草案では、ワーキンググループで今後議論される予定の未解決課題が一部ハイライトされています。 これらの課題の妥当性を含め、その帰結についてはまだ何も決定されていません。

この文書はGPU for the Web ワーキンググループにより、勧告トラックを用いて候補勧告草案として公開されました。この文書は少なくとも所定の期日までは候補勧告のままとなります。

グループは、すべての機能について、最新のGPUシステムAPI上で2つ以上のブラウザによる実装例を示す予定です。テストスイートは実装レポートの作成に用いられます。

候補勧告として公開されたことは、W3Cおよびそのメンバーによる支持を意味するものではありません。候補勧告草案には、作業グループが次回の候補勧告スナップショットに含める予定の、前回候補勧告からの変更点が統合されています。

本文書は随時維持・更新されます。一部内容は作業途中です。

この文書はW3C特許ポリシーの下で活動するグループによって作成されました。W3Cは、グループの成果物に関する公開特許情報リストを管理しています。このページには特許の開示方法も記載されています。ある人物が、必須クレームを含むと信じる特許を実際に知っている場合、その情報をW3C特許ポリシー第6節に従い開示する必要があります。

この文書は2025年8月18日付のW3Cプロセス文書に準拠します。

1. はじめに

このセクションは規定ではありません。

グラフィックス処理ユニット(GPU)は、パーソナルコンピュータにおいて豊かなレンダリングや計算アプリケーションを可能にする重要な役割を担っています。 WebGPUは、Web上でGPUハードウェアの機能を公開するAPIです。 このAPIは、(2014年以降の)ネイティブGPU APIに効率良くマッピングできるよう、ゼロから設計されています。 WebGPUはWebGLとは関係なく、OpenGL ESを明示的にターゲットとしていません。

WebGPUは物理的なGPUハードウェアをGPUAdapterとして扱います。 アダプターへの接続は GPUDeviceを介し、リソースの管理やデバイスのGPUQueueによるコマンド実行を行います。 GPUDeviceは、処理ユニットへ高速アクセス可能な独自メモリを持つ場合があります。 GPUBufferGPUTextureは、GPUメモリに裏付けられた物理リソースです。 GPUCommandBufferGPURenderBundleは、ユーザーが記録したコマンドのコンテナです。 GPUShaderModuleシェーダーコードを格納します。他のリソース、例えばGPUSamplerGPUBindGroupは、GPUが 物理リソースを利用する方法を構成します。

GPUはGPUCommandBufferでエンコードされたコマンドを実行し、データをパイプライン(固定機能とプログラム可能ステージの混在)へ流します。プログラム可能ステージは シェーダー(GPU上で動作する専用プログラム)を実行します。 パイプラインのほとんどの状態は GPURenderPipelineGPUComputePipelineオブジェクトで定義されます。それ以外の状態は、コマンド beginRenderPass()setBlendConstant()などでエンコード時に設定されます。

2. 悪意ある利用への考慮事項

このセクションは規定ではありません。 このAPIをWebで公開することによるリスクについて説明します。

2.1. セキュリティの考慮事項

WebGPUのセキュリティ要件はWebにおける従来の要件と同じであり、妥協の余地はありません。 一般的なアプローチは、GPUに到達する前にすべてのコマンドを厳格に検証し、ページが自身のデータのみを操作できるようにすることです。

2.1.1. CPUベースの未定義動作

WebGPU実装は、ユーザーによるワークロードをターゲットプラットフォーム固有のAPIコマンドに変換します。ネイティブAPIはコマンドの正しい使用方法を規定しており(例:vkCreateDescriptorSetLayout)、有効な利用規則を守らない場合の結果は保証されません。 これは「未定義動作」と呼ばれ、攻撃者が自身の所有しないメモリにアクセスしたり、ドライバに任意のコードを実行させたりすることに悪用される可能性があります。

安全でない利用を禁止するため、WebGPUの許容動作範囲は全ての入力に対して定義されています。 実装はユーザーからの全ての入力を検証し、有効なワークロードのみドライバへ到達させる必要があります。本書では全てのエラー条件とその取り扱いについて規定しています。 例えば、copyBufferToBuffer()の「source」と「destination」の両方で、交差する範囲の同一バッファを指定すると、 GPUCommandEncoderはエラーを生成し、他の操作は行われません。

エラー処理の詳細は§ 22 エラーとデバッグを参照してください。
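上記のcopyBufferToBuffer()の例にあるような「範囲が交差する同一バッファへのコピーを拒否する」検証の考え方は、次のような規定外のスケッチで表せます(関数名validateBufferCopyは本仕様のものではなく、説明のための仮定です)。

```javascript
// 同一バッファ内で範囲が交差するコピーをエラーとして拒否する、
// copyBufferToBuffer() 検証の一部を模した規定外のスケッチ。
function rangesOverlap(srcOffset, dstOffset, size) {
  // [srcOffset, srcOffset+size) と [dstOffset, dstOffset+size) の交差判定
  return srcOffset < dstOffset + size && dstOffset < srcOffset + size;
}

function validateBufferCopy(source, sourceOffset, destination, destinationOffset, size) {
  // 実際の仕様にはさらに多くの検証があるが、ここでは交差チェックのみを示す
  if (source === destination &&
      rangesOverlap(sourceOffset, destinationOffset, size)) {
    return { valid: false, reason: "source and destination ranges overlap" };
  }
  return { valid: true };
}
```

検証に失敗した場合、仕様本文にある通りエラーが生成され、コピー自体は一切実行されません。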

2.1.2. GPUベースの未定義動作

WebGPUのシェーダーはGPUハードウェア内部の計算ユニットで実行されます。ネイティブAPIでは、一部のシェーダー命令がGPU上で未定義動作となる場合があります。 これに対応するため、WebGPUではシェーダー命令セットとその動作を厳密に定義しています。シェーダーがcreateShaderModule()に渡される際、 WebGPU実装はプラットフォーム固有シェーダーへの変換や最適化を行う前に、必ず検証を行います。

2.1.3. 未初期化データ

一般に、新しいメモリの割り当ては、システム上で他のアプリケーションが残したデータが露出する可能性があります。 これに対処するため、WebGPUは概念的にすべてのリソースをゼロ初期化しますが、実際には開発者が手動で内容を初期化する場合はこの手順を省略することもあります。 シェーダー内の変数や共有ワークグループメモリもこれに含まれます。

ワークグループメモリのクリア方法はプラットフォームにより異なります。 ネイティブAPIがクリア機能を提供しない場合、WebGPU実装は計算シェーダー内で全呼び出しを使ってクリアを行い、同期後に開発者のコード実行を続行します。

注意:
キュー操作で利用されるリソースの初期化状態は、コマンドバッファへのエンコード時ではなく、操作がキューに登録された時点でのみ把握できます。そのため、 一部実装では、最適化されていない遅延クリア(例: テクスチャのクリア、もしくはGPULoadOp "load""clear"へ変更)が必要になる場合があります。

そのため、すべての実装は、その実装では実際に低下が生じない場合であっても、この潜在的なパフォーマンス低下について開発者コンソールで警告を出すべきです。

2.1.4. シェーダー内の範囲外アクセス

シェーダー物理リソースへ直接(例:"uniform" GPUBufferBinding)、またはテクスチャユニット(座標変換を扱う固定機能ハードウェアブロック)経由でアクセスできます。 WebGPU APIの検証では、シェーダーへの全入力が提供され、使用法・型が正しいことのみ保証できます。 テクスチャユニットが関与しない場合、データへのアクセスが範囲内であることはAPIレベルでは保証できません。

シェーダーがアプリケーション所有外のGPUメモリへアクセスするのを防ぐため、WebGPU実装はドライバの「堅牢なバッファアクセス」モードを有効化し、アクセスをバッファ範囲内に制限する場合があります。

または、手動で範囲外チェックを挿入するようにシェーダーコードを変換することもできます。この場合、範囲外チェックは配列インデックスへのアクセスにのみ適用されます。構造体の単純なフィールドアクセスについては、ホスト側のminBindingSize検証により不要です。

シェーダーが物理リソース範囲外のデータを読み込もうとした場合、実装は以下のいずれかを許容します:

  1. リソース範囲内の他の場所の値を返す

  2. 値ベクトル "(0, 0, 0, X)"(Xは任意)を返す

  3. 描画またはディスパッチ呼び出しを部分的に破棄する

シェーダーが物理リソース範囲外へデータを書き込もうとした場合、実装は以下のいずれかを許容します:

  1. リソース範囲内の他の場所へ値を書き込む

  2. 書き込み操作を破棄する

  3. 描画またはディスパッチ呼び出しを部分的に破棄する
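上記の許容動作のうち「リソース範囲内の別の場所の値を返す」と「書き込み操作を破棄する」を、インデックス操作としてJSで模した規定外のスケッチを示します(関数名robustRead/robustWriteは説明のための仮定です)。

```javascript
// 範囲外アクセスに対する許容動作を配列操作で模した規定外のスケッチ。
function robustRead(buffer, index) {
  // インデックスを [0, length-1] にクランプして読む
  // (「リソース範囲内の別の場所の値を返す」に相当)
  const clamped = Math.min(Math.max(index, 0), buffer.length - 1);
  return buffer[clamped];
}

function robustWrite(buffer, index, value) {
  // 範囲外の書き込みを破棄する(「書き込み操作を破棄する」に相当)
  if (index < 0 || index >= buffer.length) return;
  buffer[index] = value;
}
```

実際の実装では、ドライバの「堅牢なバッファアクセス」モードや、シェーダー変換によるチェック挿入など、いずれかの手段でこの性質を実現します。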

2.1.5. 無効なデータ

CPUからGPUへ浮動小数点データをアップロードする場合や、GPU上で生成する場合、無限大やNaN(非数)など、正しい数値に対応しない2進表現になる場合があります。このときのGPUの動作は、IEEE-754標準に準拠したGPUハードウェア実装の精度に依存します。 WebGPUは、無効な浮動小数点数値の導入が算術計算結果のみに影響し、それ以外の副作用は生じないことを保証します。
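「無効な浮動小数点値の導入は算術結果のみに影響する」という保証の意味は、IEEE-754のNaN・無限大の伝播規則として理解できます。以下はJSの数値(同じくIEEE-754倍精度)でその性質を示すだけの規定外の例です。

```javascript
// NaN・無限大は算術結果として伝播するだけで、
// それ以外の副作用(クラッシュ等)を持たないことを示す例。
const nan = Number.NaN;
const inf = Number.POSITIVE_INFINITY;

const results = {
  nanPlusOne: nan + 1,        // NaN は算術演算で伝播する
  infMinusInf: inf - inf,     // 不定形 (∞ - ∞) は NaN になる
  nanComparison: nan === nan, // NaN はどの値とも等しくない (IEEE-754)
};
```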

2.1.6. ドライバのバグ

GPUドライバも他のソフトウェア同様、バグの影響を受けます。バグが発生した場合、攻撃者がドライバの誤動作を利用して特権外のデータへアクセスする可能性があります。 このリスク低減のため、WebGPUワーキンググループは、WebGL同様にWebGPU適合テストスイート(CTS)をGPUベンダーのドライバテスト工程に統合するよう協力します。 WebGPU実装では、既知のバグへの対応策を講じ、回避困難なバグがあるドライバではWebGPUの利用を無効化することが期待されます。

2.1.7. タイミング攻撃

2.1.7.1. コンテンツ・タイムラインのタイミング

WebGPUは、エージェントクラスタ内のエージェント間で共有される新しい状態をJavaScript(コンテンツタイムライン)に公開しません。 コンテンツタイムラインの状態(例:[[mapping]])は、通常のJavaScriptと同様、明示的なコンテンツタイムラインタスクの中でのみ変更されます。

2.1.7.2. デバイス/キュー・タイムラインのタイミング

書き込み可能なストレージバッファや他の呼び出し間通信は、キュータイムライン上で高精度タイマー構築に利用される場合があります。

オプション機能"timestamp-query"もGPU操作の高精度タイミングを提供します。セキュリティ・プライバシー対策として、タイミングクエリの値は低い精度に揃えられます。詳細はcurrent queue timestampを参照してください。
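タイムスタンプを「低い精度に揃える」処理は、粒度の倍数への切り捨て(量子化)として実現できます。以下は規定外のスケッチで、粒度(100µs = 100000ns)は説明のための仮定であり、実装ごとに異なり得ます。

```javascript
// タイミングクエリ値を低精度に揃える処理を模した規定外のスケッチ。
function coarsenTimestamp(ns, granularityNs = 100000) {
  // 粒度の倍数へ切り捨てることで、高精度タイマーとしての有用性を下げる
  return Math.floor(ns / granularityNs) * granularityNs;
}
```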

2.1.8. Row hammer攻撃

Row hammerはDRAMセルの状態漏洩を利用する攻撃手法です。GPUでも利用される可能性があります。 WebGPUは特別な対策を持たず、プラットフォームレベルの対策(メモリリフレッシュ間隔短縮など)に依存します。

2.1.9. サービス妨害(DoS)

WebGPUアプリケーションはGPUメモリや計算ユニットへアクセス可能です。WebGPU実装は、他のアプリケーションの応答性維持のため、利用可能なGPUメモリ量を制限する場合があります。 GPU処理時間については、アプリケーションが数秒以上GPUの応答を止めないよう「ウォッチドッグ」タイマーを設けることもできます。 これらの対策はWebGLでも用いられています。

2.1.10. ワークロード識別

WebGPUは、同一マシン上で動作する他のプログラム(Webページ)と共有される、制約のあるグローバルリソースへアクセスします。アプリケーションは、これら共有リソースの利用パターンから、他のページで実行中のワークロードを間接的に推測できる場合があります。 これらの問題は、JavaScriptにおけるシステムメモリやCPU実行スループットに関する問題と同様であり、WebGPUは追加の対策を提供しません。

2.1.11. メモリリソース

WebGPUは、VRAMなどのマシングローバルメモリヒープからの失敗可能な割り当てを公開します。 これにより、あるヒープ種別の残りメモリ量を、割り当てを試みて失敗を監視することで推測できます。

GPUは内部的に1つ以上(通常は2つのみ)のメモリヒープを、全アプリケーションで共有しています。ヒープが枯渇するとWebGPUはリソース作成に失敗します。 これは観測可能であり、悪意あるアプリケーションが他アプリケーションのヒープ利用状況や割り当て量を推測できる場合があります。

2.1.12. 計算リソース

他サイトが同時にWebGPUを利用すると、処理完了までの時間増加を観測できる場合があります。例えば、サイトが継続的に計算ワークロードをキューへ送り、完了を監視することで、他の何かがGPU利用を開始したことを推測できます。

GPUには演算ユニット、テクスチャサンプリングユニット、アトミックユニットなど、個別に負荷を測定できる多数の部品があります。悪意あるアプリケーションは、これらのユニットの負荷状況を感知し、他のアプリケーションのワークロードを推測しようとする場合があります。これはJavaScriptにおけるCPU実行状況と同様の現実です。

2.1.13. 機能の濫用

悪意あるサイトは、WebGPUが公開する機能を悪用し、ユーザーや体験の利益にならない計算(隠れた暗号通貨マイニング、パスワード解析、レインボーテーブル計算など)を実行する可能性があります。

API利用のこうした用途を防ぐことはできません。ブラウザーが正当なワークロードと悪用ワークロードを区別できないためです。これはWeb上の汎用計算機能(JavaScript、WebAssembly、WebGL)全般に共通する問題で、WebGPUは一部ワークロードの実装・実行を容易または効率化するだけです。

この種の濫用軽減策として、ブラウザーはバックグラウンドタブの操作をスロットリングしたり、リソース大量利用中のタブを警告したり、WebGPU利用可能なコンテキストを制限できます。

ユーザーエージェントは、特に悪意ある利用による高い電力消費に対し、ヒューリスティック(経験則)に基づいてユーザーへ警告を発することができます。 そのような警告を実装する場合、JavaScript、WebAssembly、WebGLなどと同様にWebGPUも判断基準に含めるべきです。

2.2. プライバシーの考慮事項

ここにはトラッキングベクトルがあります。 WebGPUのプライバシー考慮事項はWebGLと似ています。GPU APIは複雑であり、開発者が効果的に利用するために、デバイスの機能の様々な側面を必要に応じて公開する必要があります。一般的な対策としては、識別につながる情報を正規化またはビニングし、可能な限り挙動を統一することが含まれます。

ユーザーエージェントは、32個を超える識別可能な構成やバケットを公開すべきではありません。

2.2.1. 機器固有の機能と制限

WebGPUは、基盤となるGPUアーキテクチャやデバイス形状に関する多くの詳細を公開できます。 これには利用可能な物理アダプター、GPUやCPUリソースの多数の制限(最大テクスチャサイズなど)、および利用可能なオプションのハードウェア固有機能が含まれます。

ユーザーエージェントは、実際のハードウェア制限を公開する義務はなく、機器固有情報の公開度合いを完全に制御できます。フィンガープリント防止の一手として、すべてのターゲットプラットフォームを少数のビンにまとめる手法があります。全体として、ハードウェア制限の公開によるプライバシーへの影響はWebGLと同等です。

デフォルトの制限値も、ほとんどのアプリケーションがより高い制限を要求せずとも動作できるよう、意図的に十分高く設定されています。 APIの利用は要求された制限値に従い検証されるため、実際のハードウェア機能が偶然ユーザーに露出することはありません。

2.2.2. 機器固有のアーティファクト

WebGLと同様に、機器固有のラスタライズ/精度アーティファクトやパフォーマンス差が観測される場合があります。これにはラスタライズ範囲やパターン、シェーダーステージ間の補間精度、計算ユニットのスケジューリング、その他実行に関する要素が含まれます。

一般に、ラスタライズや精度のフィンガープリントは、各ベンダーのほぼすべてのデバイスで一致します。パフォーマンス差への対処は比較的困難ですが、信号としては比較的弱い傾向にあります(JSの実行性能と同様)。

プライバシー重視のアプリケーションやユーザーエージェントは、こうしたアーティファクトを除去するためにソフトウェア実装を利用すべきです。

2.2.3. 機器固有のパフォーマンス

ユーザーを識別するもう一つの要素は、GPU上の特定操作の性能測定です。低精度タイミングであっても、操作を繰り返し実行することで、ユーザーのマシンが特定のワークロードを得意とするかどうかが判明します。 これはWebGLやJavaScriptにも存在する一般的なベクトルですが、信号としては弱く、完全な正規化は困難です。

WebGPUの計算パイプラインは、固定機能ハードウェアに妨げられないGPUアクセスを公開します。これによりユニークなデバイスフィンガープリントのリスクが高まります。ユーザーエージェントは論理的なGPU呼び出しと実際の計算ユニットを分離する等の対策でリスク低減が可能です。

2.2.4. ユーザーエージェントの状態

本仕様は、オリジンごとの追加ユーザーエージェント状態を定義していません。 ただし、ユーザーエージェントは高負荷なコンパイル結果(GPUShaderModuleGPURenderPipelineGPUComputePipeline等)のコンパイルキャッシュを持つことが期待されます。 これらのキャッシュはWebGPUアプリケーションの初回訪問後の読み込み時間短縮に重要です。

仕様上は、これらのキャッシュは非常に高速なコンパイルと区別できませんが、アプリケーション側ではcreateComputePipelineAsync()の解決にかかる時間を容易に測定でき、オリジン間で情報漏洩する可能性があります(例:「ユーザーがこの特定のシェーダーでサイトへアクセスしたか」)。そのためユーザーエージェントはストレージ分割のベストプラクティスに従うべきです。

システムのGPUドライバも独自のシェーダーやパイプラインのコンパイルキャッシュを持つ場合があります。ユーザーエージェントは可能な限りこれらを無効化するか、パーティションごとのデータをシェーダーへ加えて、GPUドライバが別物とみなすようにすることもできます。

2.2.5. ドライバのバグ

セキュリティの考慮事項で述べた懸念に加え、ドライバのバグはユーザーの識別手段となる挙動差を生じる場合があります。セキュリティの考慮事項に記載の対策(GPUベンダーとの協調、既知問題へのワークアラウンド実装等)もここで適用されます。

2.2.6. アダプタ識別子

WebGLの過去の経験から、開発者がGPUの種類を特定可能であることが、堅牢なGPUベースコンテンツの作成・保守に正当な必要があることが示されています。例として、既知のドライババグがあるアダプタを特定して回避したり、特定ハードウェアで性能が期待通りでない機能を避けたりする場合などです。

しかしアダプタ識別子の公開はフィンガープリント情報の増加につながるため、識別精度の制限が望まれます。

堅牢なコンテンツとプライバシー保護のバランスを取るため、いくつかの対策が可能です。まず、ユーザーエージェントが既知のドライバ問題を特定し回避することで、開発者の負担を軽減できます(これはブラウザがGPU利用を始めて以来行われています)。

アダプタ識別子をデフォルトで公開する場合、可能な限り幅広く(ベンダーや一般的なアーキテクチャのみ)しつつ有用性を保つべきです。場合によっては、実際のアダプタの合理的な代理となる識別子を報告する場合もあります。

バグ報告など、アダプタの詳細情報が有用な場合は、ユーザーの同意を得て追加情報をページに公開することが可能です。

最後に、ユーザーエージェントは、強化プライバシーモードなど適切と判断した場合、アダプタ識別子を一切報告しない裁量を常に持ちます。

3. 基本事項

3.1. 規約

3.1.1. 構文上の省略形

本仕様では、以下の構文上の省略形を使用します:

.(ドット)構文。プログラミング言語で一般的です。

Foo.Bar」は「値(またはインターフェース)FooBarメンバー」を意味します。 Foo順序付きマップであり、BarFoo存在しない場合はundefinedを返します。

Foo.Bar提供されている」は「値FooマップBarメンバーが存在する」ことを意味します。

?.(オプショナルチェーン)構文。JavaScript由来です。

Foo?.Bar」は「Foonullまたはundefined、またはBarFoo存在しない場合はundefined、それ以外はFoo.Bar」を意味します。

例として、bufferGPUBufferの場合、 buffer?.[[device]].[[adapter]]は 「buffernullまたはundefinedならundefined、 それ以外はbuffer[[device]]内部スロットの[[adapter]]内部スロット」を指します。

??(ヌリッシュ合体)構文。JavaScript由来です。

x ?? y」は「xがnullまたはundefinedでないならx、そうでなければy」です。

スロットバック属性

同名の内部スロットで裏付けられるWebIDL属性です。可変の場合と不可変の場合があります。
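上記の「??(ヌリッシュ合体)」の挙動は、次の簡単な例で確認できます。xnullまたはundefinedのときだけyが採用され、0や空文字などの偽値はそのまま残ります。

```javascript
// 「x ?? y」の挙動を示す例。
const fallback = "default";
const a = null ?? fallback;      // null なので fallback
const b = undefined ?? fallback; // undefined なので fallback
const c = 0 ?? fallback;         // 0 は null/undefined ではないのでそのまま
const d = "" ?? fallback;        // 空文字もそのまま
```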

3.1.2. WebGPUオブジェクト

WebGPUオブジェクトは、WebGPUインターフェース内部オブジェクトから構成されます。

WebGPUインターフェースは、WebGPUオブジェクトの公開インターフェースと状態を定義します。 作成されたコンテンツタイムライン上で利用でき、JavaScript公開WebIDLインターフェースです。

GPUObjectBaseを含むインターフェースはすべてWebGPUインターフェースです。

内部オブジェクトは、WebGPUオブジェクトの状態をデバイスタイムライン上で追跡します。 内部オブジェクトの可変状態の読み書きは、単一の順序付けられたデバイスタイムライン上でのみ実行されます。

以下の特別なプロパティ型がWebGPUオブジェクトに定義できます:

不変プロパティ

オブジェクト初期化時に設定される読み取り専用スロット。任意のタイムラインからアクセスできます。

注意: このスロットは不変なので、必要に応じて複数のタイムラインでコピーを持つことができます。 不変プロパティは、本仕様で複数コピーの記述を避けるためこう定義されています。

[[角括弧付き]]の場合は内部スロット。
角括弧なしの場合はスロットバック属性です。

コンテンツタイムラインプロパティ

オブジェクト作成時のコンテンツタイムラインでのみアクセス可能なプロパティ。

[[角括弧付き]]の場合は内部スロット。
角括弧なしの場合はスロットバック属性です。

デバイスタイムラインプロパティ

内部オブジェクトの状態を追跡し、作成されたデバイスタイムラインでのみアクセス可能なプロパティ。デバイスタイムラインプロパティは可変です。

デバイスタイムラインプロパティ[[角括弧付き]]で内部スロットです。

キュータイムラインプロパティ

内部オブジェクトの状態を追跡し、作成されたキュータイムラインでのみアクセス可能なプロパティ。キュータイムラインプロパティは可変です。

キュータイムラインプロパティ[[角括弧付き]]で内部スロットです。

interface mixin GPUObjectBase {
    attribute USVString label;
};
新しいWebGPUオブジェクトを作成する(GPUObjectBase parent, interface T, GPUObjectDescriptorBase descriptor) (TGPUObjectBaseを拡張する) 場合、次のコンテンツタイムライン手順を実行する:
  1. deviceparent.[[device]]とする。

  2. objectTの新しいインスタンスとする。

  3. object.[[device]]deviceを設定する。

  4. object.labeldescriptor.labelを設定する。

  5. objectを返す。
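上記の手順は、内部スロットを通常のプロパティで代用した次のような規定外のスケッチで表せます(クラス名FakeGPUObjectやプロパティ名は説明のための仮定です)。

```javascript
// 「新しいWebGPUオブジェクトを作成する」の各手順を模した規定外のスケッチ。
function createNewWebGPUObject(parent, T, descriptor) {
  const device = parent.device;          // 1. device を parent.[[device]] とする
  const object = new T();                // 2. T の新しいインスタンスを作る
  object.device = device;                // 3. object.[[device]] に device を設定
  object.label = descriptor.label ?? ""; // 4. object.label に descriptor.label を設定(既定値 "")
  return object;                         // 5. object を返す
}

class FakeGPUObject {} // 説明用のダミー型
```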

GPUObjectBase には以下の不変プロパティがあります:

[[device]]device(readonly)

内部オブジェクトの所有するデバイスです。

このオブジェクトの内容に対する操作は、このデバイスのデバイスタイムライン上で実行されていること、およびデバイスが有効であることをアサートします。

GPUObjectBase には以下のコンテンツタイムラインプロパティがあります:

labelUSVString

開発者が指定するラベル。実装定義の方法で利用されます。 ブラウザ、OS、その他ツールが、基盤となる内部オブジェクトを開発者へ識別するために使用可能です。 例:GPUError メッセージ、コンソール警告、ブラウザデベロッパーツール、プラットフォームデバッグユーティリティなどで表示されます。

注意:
実装は、WebGPUオブジェクトの識別にラベルを用いることで、エラーメッセージを充実させることが推奨されます。

ただし、ラベルが唯一の識別手段である必要はありません。ラベルが設定されていない場合などに備え、実装は利用可能な他の情報も活用すべきです。

注意:
labelGPUObjectBaseのプロパティです。 2つのGPUObjectBaseラッパーオブジェクトは、同じ基盤オブジェクトを参照していても、ラベル状態は完全に分離しています (例:getBindGroupLayout()で返された場合)。 labelプロパティは、JavaScriptから設定された場合のみ変更されます。

つまり、1つの基盤オブジェクトが複数ラベルと関連付けられる場合があります。 本仕様ではラベルがデバイスタイムラインに伝搬する方法は定義しません。 ラベルの利用方法は完全に実装定義です。エラーメッセージで最新ラベル、全ラベル、あるいはラベルなしを表示する場合があります。

一部ユーザーエージェントが基盤ネイティブAPIのデバッグ機能にラベルを渡す場合があるため、型はUSVStringです。

GPUObjectBase には以下のデバイスタイムラインプロパティがあります:

[[valid]]boolean (初期値true

trueの場合、内部オブジェクトが有効であることを示します。

注意:
理想的にはWebGPUインターフェースは、その親オブジェクト(例:所有する[[device]])のガベージコレクションを妨げるべきではありません。ただし、一部実装で親オブジェクトへの強参照が必要な場合は保証できません。

そのため、開発者はWebGPUインターフェースが、そのインターフェースのすべての子オブジェクトがガベージコレクトされるまで、ガベージコレクトされない可能性があると想定すべきです。これにより、一部リソースが予想より長期間割り当てられる場合があります。

割り当てリソースの予測可能な解放が必要な場合は、destroyメソッド(例:GPUDevice.destroy()GPUBuffer.destroy())の呼び出しを推奨します。ガベージコレクションへの依存は避けてください。

3.1.3. オブジェクト記述子

オブジェクト記述子は、オブジェクトの作成に必要な情報を保持します。 通常、create*メソッド(GPUDeviceのメソッド)を使って作成されます。

dictionary GPUObjectDescriptorBase {
    USVString label = "";
};

GPUObjectDescriptorBase のメンバーは以下の通りです:

labelUSVString、デフォルト値 ""

GPUObjectBase.labelの初期値です。

3.2. 非同期性

3.2.1. 無効な内部オブジェクトと伝播する無効性

WebGPUのオブジェクト生成操作はPromiseを返しませんが、内部的には非同期処理です。返されるオブジェクトは内部オブジェクトを参照し、デバイスタイムライン上で操作されます。 例外やリジェクトで失敗するのではなく、多くのエラーは関連するデバイスタイムラインGPUErrorを生成し、デバイスに通知します。

内部オブジェクト有効無効のいずれかです。 無効オブジェクトは後で有効になることはありませんが、 有効オブジェクトが無効化される場合もあります。

作成時にオブジェクトが無効になる場合があります。例えばオブジェクト記述子が有効なオブジェクトを表していない場合や、リソース割り当てに十分なメモリがない場合です。 また、他の無効なオブジェクトから生成した場合(例:無効なGPUTextureに対してcreateView()を呼ぶ場合)も発生します。 このケースは伝播する無効性と呼ばれます。

内部オブジェクトほとんどの型で作成後に無効になりませんが、使用不能になる場合があります(例:所有デバイスが失われたdestroyedされた、バッファ状態が「destroyed」など)。

一部の型では、作成後に無効になる場合があります。特に、デバイスアダプターGPUCommandBuffer、 コマンド/パス/バンドルエンコーダです。

あるGPUObjectBase object有効であるとは、 object.[[valid]]trueであることです。
あるGPUObjectBase object無効であるとは、 object.[[valid]]falseであることです。
あるGPUObjectBase objecttargetObject併用可能(valid to use with)であるとは、以下のデバイスタイムライン条件をすべて満たす場合です:
GPUObjectBase object無効化するには、以下のデバイスタイムライン手順を実行します:
  1. object.[[valid]]falseに設定する。

3.2.2. Promiseの順序付け

WebGPUのいくつかの操作はPromiseを返します。

WebGPUは、これらのPromiseの解決(resolveまたはreject)順序について、以下を除き保証しません:

アプリケーションは他のPromiseの解決順序に依存してはなりません。

3.3. 座標系

レンダリング操作では、以下の座標系を使用します:

注意: WebGPUの座標系はDirectXのグラフィックスパイプラインの座標系に一致します。

3.4. プログラミングモデル

3.4.1. タイムライン

WebGPUの挙動は「タイムライン」で記述されます。 各操作(アルゴリズムとして定義)は、タイムライン上で実行されます。 タイムラインは、操作の順序と、どの状態がどの操作から参照できるかを明確に定義します。

注意: この「タイムライン」モデルは、ブラウザエンジンのマルチプロセスモデル(通常「コンテンツプロセス」と「GPUプロセス」)や、 多くの実装でGPU自体が独立した実行ユニットであることに由来する制約を記述します。 WebGPUの実装は、タイムラインで並列実行する必要はないため、複数プロセスやスレッドは必須ではありません。 (ただし、get a copy of the image contents of a contextのように、他タイムラインの完了を同期的に待つ場合は並行処理が必要です。)

コンテンツタイムライン

Webスクリプトの実行に関連付けられます。 本仕様で記載されているすべてのメソッド呼び出しを含みます。

あるGPUDevice deviceの操作でコンテンツタイムラインへ手順を発行するには、 queue a global task for GPUDevice deviceでその手順を発行します。

デバイスタイムライン

ユーザーエージェントが発行するGPUデバイス操作に関連付けられます。 アダプター、デバイス、GPUリソースや状態オブジェクトの作成を含みます。これらは通常、GPUを制御するユーザーエージェント側から見ると同期的ですが、別プロセスで実行されることもあります。

キュータイムライン

GPUの計算ユニット上での操作の実行に関連付けられます。実際の描画、コピー、計算ジョブなどGPU上で実行される処理を含みます。

タイムライン非依存

上記いずれかのタイムラインに関連します。

不変プロパティや、呼び出し元から渡された引数のみを操作する手順は、どのタイムラインにも発行できます。

以下は各タイムラインに関連する手順や値のスタイリング例です。 このスタイリングは規定ではありません。仕様本文では常に関連付けを記述します。
不変値例用語 定義

すべてのタイムラインで利用可能です。

コンテンツタイムライン例用語 定義

コンテンツタイムラインのみで利用可能です。

デバイスタイムライン例用語 定義

デバイスタイムラインのみで利用可能です。

キュータイムライン例用語 定義

キュータイムラインのみで利用可能です。

タイムライン非依存な手順はこのような見た目です。

不変値例用語の利用例。

コンテンツタイムラインで実行される手順はこのような見た目です。

不変値例用語の利用例。 コンテンツタイムライン例用語の利用例。

デバイスタイムラインで実行される手順はこのような見た目です。

不変値例用語の利用例。 デバイスタイムライン例用語の利用例。

キュータイムラインで実行される手順はこのような見た目です。

不変値例用語の利用例。 キュータイムライン例用語の利用例。

本仕様では、非同期操作は戻り値がコンテンツタイムライン以外のタイムラインで行われる処理に依存する場合に使われます。 APIではPromiseやイベントで表現されます。

GPUComputePassEncoder.dispatchWorkgroups():
  1. ユーザーはdispatchWorkgroupsコマンドをGPUComputePassEncoderのメソッドで呼び出し、コンテンツタイムライン上でエンコードされます。

  2. ユーザーはGPUQueue.submit() を呼び、 GPUCommandBuffer をユーザーエージェントに渡します。これはOSドライバによる低レベルのサブミットとしてデバイスタイムライン上で処理されます。

  3. サブミットはGPUの呼び出しスケジューラによって実際の計算ユニットへ割り当てられ、キュータイムライン上で実行されます。

GPUDevice.createBuffer():
  1. ユーザーはGPUBufferDescriptor を記入し、 GPUBuffer を作成します。 これはコンテンツタイムライン上で行われます。

  2. ユーザーエージェントはデバイスタイムライン上で低レベルのバッファを作成します。

GPUBuffer.mapAsync():
  1. ユーザーはGPUBuffer のマップをコンテンツタイムライン上でリクエストし、Promiseが返されます。

  2. ユーザーエージェントはバッファがGPUで現在使用中かどうかを確認し、使用終了後に再確認するリマインダーを設定します。

  3. GPUがキュータイムライン上でバッファの使用を終えた後、ユーザーエージェントがメモリへのマッピングを行い、Promiseをresolveします。

3.4.2. メモリモデル

このセクションは規定ではありません。

アプリケーション初期化時にGPUDeviceを取得したら、 WebGPUプラットフォームは以下のレイヤーで構成されると記述できます:

  1. 本仕様を実装するユーザーエージェント。

  2. このデバイス用の低レベルネイティブAPIドライバを持つオペレーティングシステム。

  3. 実際のCPUおよびGPUハードウェア。

WebGPUプラットフォームの各レイヤーは、 ユーザーエージェントが仕様実装時に考慮すべき異なるメモリ型を持つ場合があります:

ほとんどの物理リソースは、 GPUによる計算やレンダリングに効率的なメモリ型で割り当てられます。 ユーザーがGPUに新しいデータを提供する必要がある場合、データがプロセス境界を越えてGPUドライバと通信するユーザーエージェント部分へ届き、 さらにドライバに見えるようにする必要があります(これはドライバ割り当てのステージングメモリへのコピーを伴う場合もあります)。 最後に、専用GPUメモリへ転送され、内部レイアウトがGPU操作に最適な形へ変換されることもあります。

これらすべての遷移は、ユーザーエージェントによるWebGPU実装で処理されます。

注意: この例は最悪ケースを記述していますが、実際の実装ではプロセス境界を越えない場合や、 ドライバ管理メモリをArrayBufferとして直接公開し、データコピーを回避できる場合もあります。

3.4.3. リソースの使用法

物理リソースは、内部使用法としてGPUコマンドで利用できます。

input

描画やディスパッチ呼び出しの入力データ用バッファ。内容は保持されます。 buffer INDEX、 buffer VERTEX、 buffer INDIRECTで許可されます。

constant

シェーダーから見て定数となるリソースバインディング。内容は保持されます。 buffer UNIFORM または texture TEXTURE_BINDINGで許可されます。

storage

読み書き可能なストレージリソースバインディング。 buffer STORAGE または texture STORAGE_BINDINGで許可されます。

storage-read

読み取り専用ストレージリソースバインディング。内容は保持されます。 buffer STORAGE または texture STORAGE_BINDINGで許可されます。

attachment

レンダーパスで読み書き出力アタッチメントや書き込み専用リゾルブターゲットとして使うテクスチャ。 texture RENDER_ATTACHMENTで許可されます。

attachment-read

レンダーパスで読み取り専用アタッチメントとして使うテクスチャ。内容は保持されます。 texture RENDER_ATTACHMENTで許可されます。

サブリソースは、バッファ全体またはテクスチャのサブリソースです。

一部の内部使用法は他と互換性があります。サブリソースは、複数の使用法を組み合わせた状態になることがあります。リストU互換使用法リストである条件は、次のいずれかです:

使用法が互換使用法リストにだけ組み合わされるよう強制することで、APIはメモリ操作のデータ競合発生タイミングを制限できます。 この性質により、WebGPU向けに書かれたアプリケーションが異なるプラットフォームでも修正なしで動作しやすくなります。

例:
同じバッファをstorageとしてもinputとしても同じGPURenderPassEncoder内でバインドすると、そのバッファは互換使用法リストにはなりません。
例:
これらのルールにより読み取り専用深度ステンシルが可能です。1つの深度/ステンシルテクスチャをレンダーパス内で2種類の読み取り専用使用法として同時利用できます:
例:
使用法範囲ストレージ例外により、通常は許可されない2つのケースが許可されます:
例:
使用法範囲アタッチメント例外により、テクスチャサブリソースを複数回attachmentとして利用可能です。 これは、3Dテクスチャの非重複スライスを1つのレンダーパスで異なるアタッチメントとしてバインドするために必要です。

ただし、同じスライスを2つの異なるアタッチメントへ重ねてバインドすることはできません。これはbeginRenderPass()で検証されます。

3.4.4. 同期

使用法範囲は、サブリソースからlist<内部使用法>へのマップです。 各使用法範囲は、互いに同時に実行され得る一連の操作の範囲をカバーし、その範囲内では各サブリソースの使用法が互換使用法リストを構成していなければなりません。

使用法範囲scopeは、使用法範囲の検証を通過します。すべての[subresource, usageList]について、usageList互換使用法リストである場合です。
使用法範囲usageScope追加するには、サブリソースsubresourceと(内部使用法または内部使用法の集合)usageを指定して:
  1. usageScope[subresource]が存在しない場合、[]に設定する。

  2. 追加usageusageScope[subresource]へ追加。

使用法範囲A使用法範囲B統合するには:
  1. 各[subresource, usage]に対し:

    1. 追加subresourceBへ、使用法usageで追加。

使用法範囲はエンコード時に構築・検証されます:

使用法範囲は以下の通り:

注意: コピーコマンドは単独の操作であり、使用法範囲検証には使いません。自己競合防止のため独自検証を行います。

例:
以下のリソース使用法は使用法範囲に含まれます:

3.5. コア内部オブジェクト

3.5.1. アダプター

アダプターは、システム上のWebGPU実装を識別します。 これは、ブラウザの基盤となるプラットフォーム上の計算/レンダリング機能のインスタンス、そしてその機能上に構築されたブラウザのWebGPU実装のインスタンスの両方を指します。

アダプターGPUAdapter で公開されます。

アダプターは基盤実装を一意に表しません。 requestAdapter() を複数回呼ぶと、毎回異なるアダプターオブジェクトが返されます。

アダプターオブジェクトは、1つのデバイスしか生成できません。 requestDevice() に成功すると、アダプターの[[state]]"consumed" に変化します。 さらに、アダプターオブジェクトはいつでも期限切れになる場合があります。

注意: これにより、アプリケーションはデバイス生成時に最新のシステム状態を利用したアダプター選択を行えます。 また、初回初期化、アダプターの抜き差しによる再初期化、テスト用のGPUDevice.destroy() 呼び出しによる再初期化など、様々なシナリオで堅牢性が高まります。

アダプターは、広い互換性・予測可能な挙動・プライバシー向上などを目的に、著しい性能低下を伴う場合、フォールバックアダプターと見なされる場合があります。すべてのシステムでフォールバックアダプターが利用可能である必要はありません。

アダプターには次の不変プロパティがあります:

[[features]]ordered set<GPUFeatureName> (読み取り専用)

このアダプター上でデバイス生成に利用可能な機能

[[limits]]supported limits (読み取り専用)

このアダプター上でデバイス生成に利用可能な最良の制限値。

各アダプター制限値は、supported limits内のデフォルト値と同等またはより良い値でなければなりません。

[[fallback]]boolean (読み取り専用)

trueの場合、このアダプターはフォールバックアダプターです。

[[xrCompatible]] 型 boolean

trueの場合、このアダプターはWebXRセッションとの互換性を持つようにリクエストされたことを示します。

アダプターには次のデバイスタイムラインプロパティがあります:

[[state]] 初期値 "valid"
"valid"

このアダプターはデバイス生成に利用可能です。

"consumed"

このアダプターはすでにデバイス生成に利用されており、再利用できません。

"expired"

このアダプターは他の理由で期限切れになっています。

GPUAdapter adapter期限切れにするには、以下のデバイスタイムライン手順を実行します:
  1. adapter.[[adapter]].[[state]]"expired" を設定する。

3.5.2. デバイス

デバイスは、アダプターの論理的インスタンスであり、 これを通じて内部オブジェクトが生成されます。

デバイスGPUDevice を通じて公開されます。

デバイスは、そこから生成されたすべての内部オブジェクトの排他的な所有者です。 デバイス無効失われたまたは destroyed)になると、 それとその上で生成されたすべてのオブジェクト(直接:createTexture()、間接:createView()など)は、 暗黙的に利用不可となります。

デバイスには以下の不変プロパティがあります:

[[adapter]]アダプター (読み取り専用)

このデバイスが生成されたアダプターです。

[[features]]ordered set<GPUFeatureName> (読み取り専用)

このデバイス上で利用できる機能(生成時に算出)。 基盤アダプターが他の機能をサポートしていても、追加機能は利用できません。

[[limits]]supported limits (読み取り専用)

このデバイスで利用できる制限値(生成時に算出)。 基盤アダプターがより良い制限値をサポートしていても、追加利用はできません。

デバイスには以下のコンテンツタイムラインプロパティがあります:

[[content device]]GPUDevice (読み取り専用)

このデバイスに関連付けられたコンテンツタイムラインGPUDevice インターフェース。

アダプター adapter から GPUDeviceDescriptor descriptor新しいデバイスを生成するには、以下のデバイスタイムライン手順を実行します:
  1. featuresdescriptor.requiredFeatures の値からなるセットとする。

  2. features"texture-formats-tier2" が含まれていれば:

    1. 追加"texture-formats-tier1"features に加える。

  3. features"texture-formats-tier1" が含まれていれば:

    1. 追加"rg11b10ufloat-renderable"features に加える。

  4. 追加"core-features-and-limits"features に加える。

  5. limitsを、すべての値がデフォルト値に設定されたsupported limitsオブジェクトとする。

  6. descriptor.requiredLimitsの各(key, value)ペアについて:

    1. valueundefinedでなく、かつlimits[key]より良い場合:

      1. limits[key]にvalueを設定。

  7. deviceデバイスオブジェクトとする。

  8. device.[[adapter]]adapterを設定。

  9. device.[[features]]featuresを設定。

  10. device.[[limits]]limitsを設定。

  11. deviceを返す。
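上記の手順のうち、機能の含意展開(ステップ2〜4)と制限の解決(ステップ5〜6)は、次の規定外のスケッチで表せます。簡略化のため、すべての制限を「最大値」クラス(大きいほど良い)と仮定しています。

```javascript
// 「新しいデバイスを生成する」の機能展開・制限解決を模した規定外のスケッチ。
function resolveDeviceCapabilities(descriptor, defaultLimits) {
  const features = new Set(descriptor.requiredFeatures ?? []);
  // 上位機能は下位機能を含意する(ステップ 2〜3)
  if (features.has("texture-formats-tier2")) features.add("texture-formats-tier1");
  if (features.has("texture-formats-tier1")) features.add("rg11b10ufloat-renderable");
  features.add("core-features-and-limits"); // ステップ 4:常に付与される

  const limits = { ...defaultLimits };      // ステップ 5:既定値から開始
  for (const [key, value] of Object.entries(descriptor.requiredLimits ?? {})) {
    // ステップ 6:要求値が既定値「より良い」場合のみ採用(最大値クラスを仮定)
    if (value !== undefined && value > limits[key]) limits[key] = value;
  }
  return { features, limits };
}
```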

ユーザーエージェントがデバイスへのアクセスを取り消す必要がある場合は、 lose the device(device, "unknown") をデバイスのデバイスタイムライン上で呼び出します。 この操作は、同タイムライン上でキューされている他の操作よりも先に実行される場合があります。

操作が失敗し、その副作用がデバイス上のオブジェクトの状態を可視的に変化させたり、内部実装/ドライバ状態を破損する可能性がある場合は、 その変更が可視化されるのを防ぐため、デバイスを失うべきです。

注意: アプリケーションが(destroy()で)明示的に開始しないすべてのデバイス喪失については、 ユーザーエージェントはlostプロミスが処理されている場合でも、開発者向け警告を無条件で表示するべきです。 これらのシナリオは稀であるべきですが、WebGPU APIの多くがアプリケーションのランタイムフローを中断しないため(検証エラーなし、ほとんどのPromiseは通常通り解決)、シグナルは開発者にとって重要です。

device失う(device, reason)には、以下のデバイスタイムライン手順を実行します:
  1. 無効化deviceを無効にする。

  2. device.[[content device]]コンテンツタイムラインで以下の手順を発行する:

    1. device.lostを新しいGPUDeviceLostInfoで解決し、 reasonreasonmessage実装定義値を設定する。

      注意: messageは不必要なユーザー/システム情報を開示すべきでなく、アプリケーションによってパースされるべきではありません。

  3. device失われた状態になるまで待機している未完了手順を完了する。

注意: 失われたデバイスからはエラーは生成されません。 詳細は§ 22 エラーとデバッグを参照。

タイムラインイベントeventをデバイスdevice上でリッスンし、それをタイムラインtimeline上の手順stepsで処理するには:

eventの発生時に、timeline上でstepsを発行する。

3.6. オプション機能

WebGPUのアダプターデバイス機能を持ちます。 これは、WebGPUの機能が実装ごとに異なることを示すもので、主にハードウェアやシステムソフトウェアの制約によるものです。 機能機能(feature)または制限(limit)のいずれかです。

ユーザーエージェントは、32個を超える識別可能な構成やバケットを公開すべきではありません。

アダプターの機能は§ 4.2.1 アダプター機能保証に準拠しなければなりません。

サポートされている機能だけがrequestDevice()で要求可能です。 サポートされていない機能を要求すると失敗します。

デバイスの機能は"新しいデバイス"で決定され、アダプターのデフォルト(機能なし・デフォルトのsupported limits)から始まり、requestDevice()で要求された機能が加えられます。 これらの機能は、アダプターの機能に関係なく強制されます。

ここにはトラッキングベクトルがあります。 プライバシーの考慮事項については § 2.2.1 機器固有の機能と制限 を参照してください。

3.6.1. 機能

機能は、すべての実装でサポートされているわけではないWebGPUのオプション機能セットです。主にハードウェアやシステムソフトウェアの制約により左右されます。

すべての機能はオプションですが、アダプターはその可用性についてある程度の保証をします(§ 4.2.1 アダプター機能保証参照)。

デバイスは、生成時に決定された機能のみをサポートします(§ 3.6 オプション機能参照)。 API呼び出しは、これらの機能(アダプターの機能ではなく)に従って検証を行います。

GPUFeatureName feature有効(enabled for)であるとは、 GPUObjectBase objectにおいて、 object.[[device]].[[features]]feature含む場合のみです。

各機能が有効にする機能内容の説明は機能一覧を参照してください。

注意: 機能を有効化することが必ずしも望ましいとは限りません。有効化によってパフォーマンスに影響が出る場合があります。 このため、またデバイスや実装間の移植性向上のため、アプリケーションは実際に必要となる機能のみを要求するべきです。

3.6.2. 制限

制限は、デバイス上でWebGPUを利用する際の数値的な制約です。

注意: 「より良い」制限値を設定することが必ずしも望ましいとは限りません。パフォーマンスに影響が出る場合があります。 このため、また移植性向上のため、アプリケーションは本当に必要な場合のみデフォルトより良い制限値を要求するべきです。

各制限にはデフォルト値があります。

アダプターは常にデフォルトまたはより良い制限値をサポートすることが保証されています(§ 4.2.1 アダプター機能保証参照)。

デバイスは生成時に決定された制限値のみをサポートします(§ 3.6 オプション機能参照)。 API呼び出しは、これらの制限値(アダプターの制限値ではなく)に従って検証されます。より良い/悪い値は利用できません。

任意の制限値について、ある値は他の値よりも優れている場合があります。 優れている制限値は常に検証を緩和し、より多くのプログラムが有効となります。各制限クラスごとに「優れている」の定義があります。

制限値ごとに異なる制限クラスがあります:

最大値

制限はAPIへ渡される値の最大値を強制します。

高い値ほどより良い値です。

設定できるのはデフォルト値以上(≥ デフォルト値)のみです。 より低い値はデフォルト値に丸められます。

アライメント

制限はAPIへ渡される値の最小アライメント(値は制限値の倍数でなければならない)を強制します。

低い値ほどより良い値です。

設定できるのはデフォルト値以下かつ2の累乗(≤ デフォルト値)のみです。 2の累乗でない値は無効です。 より高い2の累乗値はデフォルト値に丸められます。
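2つの制限クラスにおける要求値の扱い(「より良い」の方向と既定値への丸め)は、次の規定外のスケッチで表せます(関数名は説明のための仮定です)。

```javascript
// 「最大値」クラスと「アライメント」クラスの要求値の扱いを模した規定外のスケッチ。
function isPowerOfTwo(v) {
  return Number.isInteger(v) && v > 0 && (v & (v - 1)) === 0;
}

// 最大値クラス:高いほど良い。既定値未満の要求は既定値に丸められる。
function resolveMaximumLimit(requested, defaultValue) {
  return Math.max(requested, defaultValue);
}

// アライメントクラス:低いほど良い。2の累乗のみ有効で、
// 既定値より大きい2の累乗は既定値に丸められる。
function resolveAlignmentLimit(requested, defaultValue) {
  if (!isPowerOfTwo(requested)) return null; // 2の累乗でない値は無効
  return Math.min(requested, defaultValue);
}
```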

supported limitsオブジェクトは、WebGPUで定義されるすべての制限値を保持します:

制限名 制限クラス デフォルト値
maxTextureDimension1D GPUSize32 最大値 8192
size.width に指定できる最大値(テクスチャdimension "1d"指定時)。
maxTextureDimension2D GPUSize32 最大値 8192
size.widthおよび size.height に指定できる最大値(テクスチャdimension "2d"指定時)。
maxTextureDimension3D GPUSize32 最大値 2048
size.widthsize.height およびsize.depthOrArrayLayers に指定できる最大値(テクスチャdimension "3d"指定時)。
maxTextureArrayLayers GPUSize32 最大値 256
size.depthOrArrayLayers に指定できる最大値(テクスチャdimension "2d"指定時)。
maxBindGroups GPUSize32 最大値 4
GPUBindGroupLayoutsbindGroupLayouts に指定できる最大数(GPUPipelineLayout作成時)。
maxBindGroupsPlusVertexBuffers GPUSize32 最大値 24
バインドグループと頂点バッファスロットの同時利用最大数(空スロットも含む、最大インデックス以下のすべて)。createRenderPipeline()draw呼び出しで検証される。
maxBindingsPerBindGroup GPUSize32 最大値 1000
GPUBindGroupLayout作成時に利用できるバインディングインデックス数。

注意: この制限は規定ですが任意です。 デフォルトのバインディングスロット制限では、1つのバインドグループで1000個のバインディングは実際には利用不可能ですが、 GPUBindGroupLayoutEntry.binding の値としては999まで指定可能です。 実装はバインディング空間を配列として扱うことができ、メモリ使用量が過度にならない範囲で疎なマップ構造ではなく配列管理が可能となります。

maxDynamicUniformBuffersPerPipelineLayout GPUSize32 最大値 8
GPUBindGroupLayoutEntry のうち、動的オフセット付きユニフォームバッファがGPUPipelineLayout全体で利用できる最大数。 バインディングスロット制限を超える場合を参照。
maxDynamicStorageBuffersPerPipelineLayout GPUSize32 最大値 4
GPUBindGroupLayoutEntry のうち、動的オフセット付きストレージバッファがGPUPipelineLayout全体で利用できる最大数。 バインディングスロット制限を超える場合を参照。
maxSampledTexturesPerShaderStage GPUSize32 最大値 16
GPUShaderStage stageごとに、 GPUBindGroupLayoutEntry のうちサンプリングテクスチャがGPUPipelineLayout全体で利用できる最大数。 バインディングスロット制限を超える場合を参照。
maxSamplersPerShaderStage GPUSize32 最大値 16
GPUShaderStage stageごとに、 GPUBindGroupLayoutEntry のうちサンプラーがGPUPipelineLayout全体で利用できる最大数。 バインディングスロット制限を超える場合を参照。
maxStorageBuffersPerShaderStage GPUSize32 最大値 8
GPUShaderStage stageごとに、 GPUBindGroupLayoutEntry のうちストレージバッファがGPUPipelineLayout全体で利用できる最大数。 バインディングスロット制限を超える場合を参照。
maxStorageTexturesPerShaderStage GPUSize32 最大値 4
GPUShaderStage stageごとに、 GPUBindGroupLayoutEntry のうちストレージテクスチャがGPUPipelineLayout全体で利用できる最大数。 バインディングスロット制限を超える場合を参照。
maxUniformBuffersPerShaderStage GPUSize32 最大値 12
GPUShaderStage stageごとに、 GPUBindGroupLayoutEntry のうちユニフォームバッファがGPUPipelineLayout全体で利用できる最大数。 バインディングスロット制限を超える場合を参照。
maxUniformBufferBindingSize GPUSize64 最大値 65536 バイト
GPUBufferBinding.size の最大値( GPUBindGroupLayoutEntry entryentry.buffer?.type"uniform" の場合)。
maxStorageBufferBindingSize GPUSize64 最大値 134217728 バイト (128 MiB)
GPUBufferBinding.size の最大値( GPUBindGroupLayoutEntry entryentry.buffer?.type"storage" または "read-only-storage" の場合)。
minUniformBufferOffsetAlignment GPUSize32 アライメント 256 バイト
GPUBufferBinding.offset および setBindGroup()で指定する動的オフセットのアライメント( GPUBindGroupLayoutEntry entryentry.buffer?.type"uniform" の場合)。
minStorageBufferOffsetAlignment GPUSize32 アライメント 256 バイト
GPUBufferBinding.offset および setBindGroup()で指定する動的オフセットのアライメント( GPUBindGroupLayoutEntry entryentry.buffer?.type"storage" または "read-only-storage" の場合)。
maxVertexBuffers GPUSize32 最大値 8
buffers の最大数(GPURenderPipeline作成時)。
maxBufferSize GPUSize64 最大値 268435456 バイト (256 MiB)
size の最大値(GPUBuffer作成時)。
maxVertexAttributes GPUSize32 最大値 16
attributes の合計最大数(buffersを含む、GPURenderPipeline作成時)。
maxVertexBufferArrayStride GPUSize32 最大値 2048 バイト
arrayStride の最大値(GPURenderPipeline作成時)。
maxInterStageShaderVariables GPUSize32 最大値 16
ステージ間通信(頂点出力やフラグメント入力など)の入出力変数の最大数。
maxColorAttachments GPUSize32 最大値 8
GPURenderPipelineDescriptor.fragment.targetsGPURenderPassDescriptor.colorAttachmentsGPURenderPassLayout.colorFormats で指定できるカラーアタッチメント最大数。
maxColorAttachmentBytesPerSample GPUSize32 最大値 32
すべてのカラーアタッチメントに対し、レンダーパイプライン出力データの1サンプル(ピクセルまたはサブピクセル)保持に必要な最大バイト数。
maxComputeWorkgroupStorageSize GPUSize32 最大値 16384 バイト
計算ステージのworkgroupストレージで利用できる最大バイト数(シェーダーエントリポイントごと)。
maxComputeInvocationsPerWorkgroup GPUSize32 最大値 256
計算ステージのworkgroup_size各次元の積の最大値(シェーダーエントリポイントごと)。
maxComputeWorkgroupSizeX GPUSize32 最大値 256
計算ステージのworkgroup_sizeのX次元最大値(シェーダーエントリポイントごと)。
maxComputeWorkgroupSizeY GPUSize32 最大値 256
計算ステージのworkgroup_sizeのY次元最大値(シェーダーエントリポイントごと)。
maxComputeWorkgroupSizeZ GPUSize32 最大値 64
計算ステージのworkgroup_sizeのZ次元最大値(シェーダーエントリポイントごと)。
maxComputeWorkgroupsPerDimension GPUSize32 最大値 65535
dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ) の引数に指定できる最大値。
3.6.2.1. GPUSupportedLimits

GPUSupportedLimits は、アダプターまたはデバイスのサポートされる制限値を公開します。 GPUAdapter.limits および GPUDevice.limits を参照してください。

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};
3.6.2.2. GPUSupportedFeatures

GPUSupportedFeaturessetlikeインターフェースです。そのset entriesは、 アダプターまたはデバイスがサポートする機能GPUFeatureName 値です。GPUFeatureName enumのいずれかの文字列しか含めてはなりません。

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};
注意:
GPUSupportedFeaturesset entries型はDOMStringです。 これは、現行標準の後続リビジョンで追加された有効なGPUFeatureNameで、 ユーザーエージェントがまだ認識しないものでも、正常に扱えるようにするためです。 set entries型が GPUFeatureNameだった場合、 下記コードはTypeErrorを投げてしまい、falseを返せません:
未認識機能のサポート有無判定例:
if (adapter.features.has('unknown-feature')) {
    // Use unknown-feature
} else {
    console.warn('unknown-feature is not supported by this adapter.');
}
3.6.2.3. WGSLLanguageFeatures

WGSLLanguageFeaturesnavigator.gpu.wgslLanguageFeaturesで利用可能なsetlikeインターフェースです。 そのset entriesは、実装がサポートするWGSL言語拡張の文字列名です (アダプターやデバイスに関係なく判定されます)。

[Exposed=(Window, Worker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};
3.6.2.4. GPUAdapterInfo

GPUAdapterInfo はアダプターの識別情報を公開します。

GPUAdapterInfo のメンバーは、特定値の設定が保証されません。値がない場合、その属性は空文字("")を返します。 どの値を公開するかはユーザーエージェントの裁量であり、端末によっては値が一切設定されないことも十分あり得ます。 したがって、アプリケーションはGPUAdapterInfo の任意の値や値が未設定の場合も必ず扱えるようにする必要があります

アダプターのGPUAdapterInfoGPUAdapter.info およびGPUDevice.adapterInfoで公開されます。 この情報は不変です。 あるアダプターに対しては、各GPUAdapterInfo 属性はアクセスするたびに同じ値を返します。

注意: GPUAdapterInfo の属性は初回アクセス時点で不変ですが、実装は各属性の公開値を初回アクセスまで遅延決定しても構いません。

注意: 他のGPUAdapterインスタンス(同じ物理アダプターを表していても)でも、 GPUAdapterInfoの値が異なる場合があります。 ただし、特定のイベント(ページが追加の識別情報取得を許可された場合。現行標準では該当イベント定義なし)がない限り、値は同じにすべきです。

ここにはトラッキングベクトルがあります。 プライバシーの考慮事項については § 2.2.6 アダプター識別子 を参照してください。

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
    readonly attribute unsigned long subgroupMinSize;
    readonly attribute unsigned long subgroupMaxSize;
    readonly attribute boolean isFallbackAdapter;
};

GPUAdapterInfo には以下の属性があります:

vendor, DOMString, 読み取り専用

アダプターのベンダー名(利用可能な場合)。なければ空文字。

architecture, DOMString, 読み取り専用

アダプターが属するGPUファミリー・クラス名(利用可能な場合)。なければ空文字。

device, DOMString, 読み取り専用

アダプターのベンダー固有識別子(利用可能な場合)。なければ空文字。

注意: これはアダプター種別を表す値(例:PCIデバイスID)です。シリアル番号など特定機器一意の値ではありません。

description, DOMString, 読み取り専用

ドライバが報告するアダプターの人間可読説明(利用可能な場合)。なければ空文字。

注意: description には整形が一切施されないため、パースは推奨されません。既知のドライバ問題回避など、GPUAdapterInfoで動作変更する場合は、他フィールドを利用すべきです。

subgroupMinSize, unsigned long, 読み取り専用

"subgroups" 機能がサポートされている場合、アダプターの最小サブグループサイズ。

subgroupMaxSize, unsigned long, 読み取り専用

"subgroups" 機能がサポートされている場合、アダプターの最大サブグループサイズ。

isFallbackAdapter, boolean, 読み取り専用

アダプターがフォールバックアダプターかどうか。
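上記のとおり、GPUAdapterInfoの各値は空文字("")になり得るため、アプリケーション側で欠損値を扱う必要があります。以下はその一例として表示用文字列を組み立てる小さなスケッチです(関数名describeAdapterは説明用の仮のもので、仕様の一部ではありません)。

```javascript
// GPUAdapterInfo(または同じ形のプレーンオブジェクト)から表示用の説明文を組み立てる。
// 各属性は空文字("")の場合があるため、必ず代替表記にフォールバックする。
function describeAdapter(info) {
    const vendor = info.vendor || '(vendor unknown)';
    const architecture = info.architecture || '(architecture unknown)';
    const fallback = info.isFallbackAdapter ? ' [fallback]' : '';
    return `${vendor} / ${architecture}${fallback}`;
}
```

`describeAdapter(adapter.info)` のように用いることを想定しています。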

アダプター adapterに対して新しいアダプター情報を作成するには、以下のコンテンツタイムライン手順を実行します:
  1. adapterInfoを新しいGPUAdapterInfoとする。

  2. ベンダーが判明していれば、adapterInfo.vendor にベンダー名(正規化識別文字列)を設定する。プライバシー保護のため、ユーザーエージェントは空文字または適当なベンダー名(正規化識別文字列)にしてもよい。

  3. アーキテクチャが判明していれば、adapterInfo.architecture にアダプターが属するファミリー・クラス名(正規化識別文字列)を設定する。プライバシー保護のため、空文字または適当なアーキテクチャ名(正規化識別文字列)でもよい。

  4. デバイスが判明していれば、adapterInfo.device にベンダー固有識別子(正規化識別文字列)を設定する。プライバシー保護のため、空文字または適当な識別子(正規化識別文字列)でもよい。

  5. 説明が判明していれば、adapterInfo.description にドライバ報告の説明文を設定。プライバシー保護のため、空文字または適当な説明でもよい。

  6. "subgroups" がサポートされていれば、subgroupMinSize に最小サブグループサイズを設定。なければ4とする。

    注意: プライバシー保護のため、ユーザーエージェントは一部機能をサポートしないか、区別不能でも利用可能な値(例:すべて4にする)を返す場合がある。

  7. "subgroups" がサポートされていれば、subgroupMaxSize に最大サブグループサイズを設定。なければ128とする。

    注意: プライバシー保護のため、ユーザーエージェントは一部機能をサポートしないか、区別不能でも利用可能な値(例:すべて128にする)を返す場合がある。

  8. adapterInfo.isFallbackAdapteradapter.[[fallback]]で設定。

  9. adapterInfoを返す。

正規化識別文字列は次のパターンに従います:

[a-z0-9]+(-[a-z0-9]+)*

使用できる文字は a-z、0-9、および区切りのハイフン(-)のみです。
正規化識別文字列の有効例:
  • gpu

  • 3d

  • 0x3b2f

  • next-gen

  • series-x20-ultra

3.7. 拡張文書

「拡張文書」とは、新しい機能を説明する追加文書であり、非規定でありWebGPU/WGSL仕様の一部ではありません。 これらは本仕様を基盤として構築される機能を記述し、多くの場合新しいAPI機能フラグやWGSLのenableディレクティブ、他のドラフトWeb標準との連携を含みます。

WebGPUの実装は拡張機能を公開してはなりません。公開すると仕様違反となります。 新しい機能はWebGPU標準(本ドキュメント)やWGSL仕様に統合されるまで、WebGPU標準の一部にはなりません。

3.8. オリジン制限

WebGPUは画像、動画、キャンバスに保存された画像データへのアクセスを許可します。 シェーダーによってGPUへアップロードされたテクスチャ内容を間接的に推測できるため、クロスオリジンメディアの利用には制限があります。

WebGPUは、オリジンがクリーンでない画像ソースのアップロードを禁止します。

これは、WebGPUで描画されたキャンバスのorigin-cleanフラグがfalseになることは決してないことも意味します。

画像・動画要素のCORSリクエスト発行については以下を参照してください:

3.9. タスクソース

3.9.1. WebGPUタスクソース

WebGPUは新しいタスクソースWebGPUタスクソース」を定義します。 これはuncapturederrorイベントおよびGPUDevice.lostに使用されます。

GPUDevice deviceに対し、グローバルタスクをキューするには、 コンテンツタイムライン上で手順stepsを使って:
  1. グローバルタスクをキューするWebGPUタスクソースで、deviceを生成したグローバルオブジェクトとstepsを指定)。

3.9.2. 自動期限切れタスクソース

WebGPUは新しいタスクソース自動期限切れタスクソース」を定義します。 これは特定オブジェクトの自動・タイマーによる期限切れ(破棄)に使用されます:

GPUDevice deviceに対し、自動期限切れタスクをキューするには、 コンテンツタイムライン上で手順stepsを使って:
  1. グローバルタスクをキューする自動期限切れタスクソースで、deviceを生成したグローバルオブジェクトとstepsを指定)。

自動期限切れタスクソースからのタスクは高優先度で処理すべきです。特に、キューされたらユーザー定義(JavaScript)タスクより先に実行すべきです。

注意:
この挙動はより予測可能であり、厳格さによって暗黙のライフタイムに関する誤った仮定を早期に検出しやすくなるため、移植性の高いアプリ開発に役立ちます。開発者は複数実装でのテストを強く推奨します。

実装ノート: 高優先度の期限切れ「タスク」は、実際のタスクを実行する代わりに、イベントループ処理モデル内の固定ポイントに追加手順を挿入する形でも有効です。

3.10. 色空間とエンコーディング

WebGPUはカラーマネジメントを提供しません。WebGPU内部の値(テクスチャ要素など)はすべて生の数値であり、カラーマネージされた値ではありません。

WebGPUは、カラーマネージされた出力(GPUCanvasConfiguration)や入力 (copyExternalImageToTexture()importExternalTexture())と連携します。 したがって、WebGPU数値と外部色値との間で色変換が必要となります。 各インターフェースポイントごとに、WebGPU数値が解釈されるエンコーディング(色空間、伝達関数、アルファ事前乗算)がローカルに定義されます。

WebGPUは、PredefinedColorSpace enumのすべての色空間を許可します。 各色空間はCSS定義に基づき拡張範囲を持ち、色空間外の値も表現可能です(色度・輝度両方)。

ガマット外の事前乗算RGBA値とは、R/G/Bチャネル値がアルファ値を超えるものです。例:事前乗算sRGB RGBA値[1.0, 0, 0, 0.5]は(非事前乗算)色[2, 0, 0]で50%アルファを表し、CSSではcolor(srgb 2 0 0 / 50%)と書けます。 sRGB色域外の色値同様、これは拡張色空間の定義済み点です(ただしアルファ0の場合は色がありません)。 ただし、この値を可視キャンバスへ出力する場合、結果は未定義です(GPUCanvasAlphaMode "premultiplied"参照)。

3.10.1. 色空間変換

色は、上記で定義された方法に従い、ある色空間での表現を別の色空間の表現に変換することで変換されます。

元の値にRGBAチャンネルが4つ未満の場合、欠損している緑/青/アルファチャンネルは順に0, 0, 1として補われ、その後に色空間/エンコーディング変換やアルファプリマルチ化処理が行われます。変換後に宛先が4チャンネル未満を必要とする場合は、余分なチャンネルは無視されます。

注意: グレースケール画像は一般的にその色空間内でRGB値(V, V, V)、またはRGBA値(V, V, V, A)として表現されます。
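上記の欠損チャネル補完規則(緑/青/アルファを順に0, 0, 1で補う)は、次のような小さなスケッチで表せます(関数名は説明用の仮のものです)。

```javascript
// RGBA チャネルが 4 つ未満の配列を、欠損分を G=0, B=0, A=1 で補って 4 要素にする。
// 変換後に宛先が 4 チャネル未満を必要とする場合の「余分なチャネルの無視」にも対応する。
function fillMissingChannels(channels) {
    const defaults = [undefined, 0, 0, 1]; // R は必須。G, B, A の既定値。
    return [0, 1, 2, 3].map((i) => channels[i] ?? defaults[i]);
}
```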

色は変換中に不可逆的にクランプされません:ある色空間から別の色空間へ変換する際、元の色値が宛先色空間のガマット範囲外の場合は、[0, 1]の範囲外の値になることがあります。例えばsRGBが宛先の場合、元がrgba16floatやDisplay-P3などの広色域だったり、プリマルチプライされてガマット外値を含んでいる場合に発生します。

同様に、元の値が高ビット深度(例:各成分16ビットのPNG)や拡張範囲(例:float16ストレージのcanvas)の場合でも、これらの色は色空間変換を通じて保持され、中間計算の精度も元データの精度以上となります。

3.10.2. 色空間変換省略

色空間・エンコーディング変換の元と先が同じならば、変換は不要です。一般に、変換の任意のステップが恒等関数(no-op)の場合、実装はパフォーマンスのため省略すべきです。

最適なパフォーマンスのため、アプリケーションは色空間やエンコーディング設定を工夫し、必要な変換数を最小化するべきです。 GPUCopyExternalImageSourceInfoの各種画像ソースに関して:

注意: これらの機能に依存する前に、各ブラウザの実装サポート状況を確認してください。

3.11. JavaScriptからWGSLへの数値変換

WebGPU APIのいくつかの部分(pipeline-overridable constants や レンダーパスのクリア値)は、WebIDL(doublefloat)の数値を受け取り、 WGSL値(bool, i32, u32, f32, f16)へ変換します。

IDL値idlValue(型double またはfloat)を WGSL型Tへ変換するには、 (TypeErrorを投げる可能性あり) 以下のデバイスタイムライン手順を実行します:

注意: このTypeErrorデバイスタイムラインで生成され、JavaScriptには表出しません。

  1. アサートidlValueは有限値である(unrestricted doubleunrestricted floatではないため)。

  2. vを、!によるidlValueECMAScript値への変換結果とする。

  3. もし Tbool の場合

    WGSL bool値を返します。これは ! を使い vIDL値boolean に変換した結果に対応します。

    注: このアルゴリズムは ECMAScript の値を IDL doublefloat に変換した後に呼ばれます。元の ECMAScript 値が数値でもブール値でもない []{} の場合、WGSL bool の結果は、元の値を IDL boolean に直接変換した場合と異なることがあります。

    もし Ti32 の場合

    WGSL i32値を返します。これは?を使いvIDL値型[EnforceRange] longに変換した結果に対応します。

    もし Tu32 の場合

    WGSL u32値を返します。これは?を使いvIDL値型[EnforceRange] unsigned longに変換した結果に対応します。

    もし Tf32 の場合

    WGSL f32値を返します。これは?を使いvIDL値floatに変換した結果に対応します。

    もし Tf16 の場合
    1. wgslF32を、?を使いvをIDL値型floatに変換したWGSL f32値とする。

    2. f16(wgslF32)、すなわちWGSL f32値を!f16に変換した結果(WGSL浮動小数点変換定義)を返す。

    注: 値がf32の範囲内なら、値がf16の範囲外でもエラーは発生しません。
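上記の変換規則のうち整数型とf32の挙動は、JavaScriptで次のようにモデル化できます(概念スケッチであり、関数名toWgslI32などは説明用の仮のものです。f16への丸めは省略しています)。

```javascript
// WebIDL の [EnforceRange] 付き整数変換と float 変換の挙動をモデル化した概念スケッチ。
// 範囲外または非有限の値は TypeError になる(仕様中の「?」付き変換に相当)。
function enforceRange(v, min, max) {
    if (!Number.isFinite(v)) throw new TypeError('non-finite value');
    const x = Math.trunc(v); // 小数部は切り捨てられる
    if (x < min || x > max) throw new TypeError(`${v} is out of [${min}, ${max}]`);
    return x;
}
const toWgslI32 = (v) => enforceRange(v, -(2 ** 31), 2 ** 31 - 1); // [EnforceRange] long
const toWgslU32 = (v) => enforceRange(v, 0, 2 ** 32 - 1);          // [EnforceRange] unsigned long
const toWgslF32 = (v) => Math.fround(v); // 最近接の単精度(f32)値へ丸める
```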

GPUColor colorテクスチャフォーマットのテクセル値formatへ変換するには、 (TypeErrorを投げる可能性あり) 以下のデバイスタイムライン手順を実行します:

注意: このTypeErrorデバイスタイムラインで生成され、JavaScriptには表出しません。

  1. formatの各コンポーネント(assert:すべて同じ型)は:

    浮動小数点型または正規化型の場合

    Tf32とする。

    符号付き整数型の場合

    Ti32とする。

    符号なし整数型の場合

    Tu32とする。

  2. wgslColorをWGSL型vec4<T>とし、各RGBAチャネル値はcolorの値を ?WGSL型Tへ変換したもの。

  3. wgslColor§ 23.2.7 出力マージの変換規則でformatへ変換し、結果を返す。

    注意: 整数型以外の場合、値の選択は実装定義となる。 正規化型の場合、値は型の範囲にクランプされる。

注意: つまり、書き込まれる値はWGSLシェーダーがvec4f32, i32, u32)として出力した場合と同じになります。
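正規化型フォーマットでのクランプ挙動は、rgba8unormを例に次のようなスケッチで表せます(仮の実装です。丸め方は実装定義のため、ここでは最近接丸めを仮定しています)。

```javascript
// GPUColor(r, g, b, a)を rgba8unorm のテクセル(各チャネル 8 ビット整数)へ変換する概念スケッチ。
// 正規化型では値が型の範囲 [0, 1] にクランプされることを示す。
function colorToRgba8Unorm(color) {
    const clamp01 = (v) => Math.min(1, Math.max(0, v));
    return [color.r, color.g, color.b, color.a].map((v) => Math.round(clamp01(v) * 255));
}
```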

4. 初期化

GPUオブジェクトは Window および WorkerGlobalScope コンテキストで利用でき、Navigator および WorkerNavigator インターフェースを通じて navigator.gpu で公開されます。

interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;

NavigatorGPU には以下の属性があります:

gpu, GPU, 読み取り専用

requestAdapter() などトップレベルエントリポイントを提供するグローバルシングルトン。

4.2. GPU

GPUはWebGPUへの入り口です。

[Exposed=(Window, Worker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
    GPUTextureFormat getPreferredCanvasFormat();
    [SameObject] readonly attribute WGSLLanguageFeatures wgslLanguageFeatures;
};

GPUは以下のメソッドを持ちます:

requestAdapter(options)

ユーザーエージェントにアダプターを要求します。 ユーザーエージェントはアダプターを返すかどうか選択し、返す場合は指定オプションに従って決定します。

呼び出し元: GPU this.

引数:

GPU.requestAdapter(options)メソッドの引数
パラメータ Nullable Optional 説明
options GPURequestAdapterOptions アダプター選択基準。

戻り値: Promise<GPUAdapter?>

コンテンツタイムライン手順:

  1. contentTimelineを現在のコンテンツタイムラインとする。

  2. promise新しいPromiseとする。

  3. initialization stepsthisデバイスタイムラインで発行する。

  4. promiseを返す。

デバイスタイムライン initialization steps:
  1. 次の手順の要求はすべて満たされなければなりません

    1. options.featureLevel 機能レベル文字列でなければなりません。

    満たされ、かつユーザーエージェントがアダプター返却を選択した場合:

    1. adapterアダプターを、§ 4.2.2 アダプター選択ルールとoptionsの基準に従い、 § 4.2.1 アダプター機能保証に従って選択・初期化する:

      1. adapter.[[limits]]adapter.[[features]] をアダプターのサポート機能に応じて設定する。 adapter.[[features]] には"core-features-and-limits"が含まれていなければならない。

      2. adapterフォールバックアダプター基準を満たす場合は adapter.[[fallback]]trueに、それ以外はfalseにする。

      3. adapter.[[xrCompatible]]options.xrCompatible を設定する。

    それ以外の場合:

    1. adapternullにする。

  2. 以降の手順をcontentTimelineで発行する。

コンテンツタイムライン手順:
  1. adapternullでなければ:

    1. promiseを、adapterをラップした新しいGPUAdapterで解決する。

  2. それ以外の場合、promiseをnullで解決する。

getPreferredCanvasFormat()

8ビット色深度・標準ダイナミックレンジのコンテンツ表示に最適なGPUTextureFormatを返します。 返す値は"rgba8unorm" または "bgra8unorm" のみです。

返された値はformat として configure()GPUCanvasContext で呼ぶ際に渡すことで、関連するキャンバスの効率的な表示が保証されます。

注意: 画面表示されないキャンバスでは、このフォーマット利用が有利とは限りません。

呼び出し元: GPU this.

戻り値: GPUTextureFormat

コンテンツタイムライン手順:

  1. WebGPUキャンバス表示に最適な形式に応じて"rgba8unorm" または "bgra8unorm" のいずれかを返す。

GPUは以下の属性を持ちます:

wgslLanguageFeatures, WGSLLanguageFeatures, 読み取り専用

サポートされるWGSL言語拡張名。サポートされる言語拡張は自動的に有効化されます。

アダプターいつでも 期限切れになる可能性があります。システム状態に変更が生じ、requestAdapter() の結果に影響する場合、ユーザーエージェントはすべての既返却済み アダプター期限切れにすべきです。例:

注意: ユーザーエージェントは、システム状態変化がなくても(例:アダプター作成後数秒・数分後など)、アダプターを頻繁に期限切れにすることを選択できます。 これにより実際のシステム状態変化の隠蔽や、requestAdapter() を再度呼び出す必要性の認識向上につながります。 この状況になっても標準的なデバイスロス回復処理で復旧可能です。

ヒントなしでGPUAdapterを要求する例:
const gpuAdapter = await navigator.gpu.requestAdapter();

4.2.1. アダプター機能保証

GPUAdapterrequestAdapter() で返された場合、以下の保証が必要です:

4.2.2. アダプター選択

GPURequestAdapterOptions は、ユーザーエージェントに対してアプリケーションに適した構成のヒントを与えます。

dictionary GPURequestAdapterOptions {
    DOMString featureLevel = "core";
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
    boolean xrCompatible = false;
};
enum GPUPowerPreference {
    "low-power",
    "high-performance",
};

GPURequestAdapterOptions には以下のメンバーがあります:

featureLevel, DOMString, デフォルト"core"

アダプター要求の「機能レベル」。

許可される機能レベル文字列値は:

"core"

効果なし。

"compatibility"

効果なし。

注意: この値は将来的に追加検証制約へのオプトイン用途で予約されています。現時点では使用しないでください。

powerPreference, GPUPowerPreference

システムの利用可能アダプターからどの種類のアダプターを選択するかのヒントを任意で指定します。

このヒント値は選択されるアダプターに影響する場合がありますが、アダプター返却有無には影響しません。

注意: このヒントの主な用途は、マルチGPU環境で使用するGPUを選択することです。 例えば一部ノートPCは低消費電力統合GPUと高性能離散GPUを持ちます。このヒントは選択GPUの電源設定にも影響する場合があります。

注意: バッテリー状態や外部ディスプレイ・着脱式GPUなどのハード構成により、同じpowerPreferenceでも異なるアダプターが選択される場合があります。 一般的には同一ハード構成・状態とpowerPreferenceなら同じアダプターが選ばれる傾向です。

以下のいずれかの値:

undefined(未指定時)

ユーザーエージェントへのヒントなし。

"low-power"

パフォーマンスより消費電力節約を優先する要求。

注意: 通常、描画性能制約がない場合(例:1fpsのみ描画、簡単なジオメトリやシェーダーのみ、HTMLキャンバス小サイズなど)はこれを使うべきです。 許容されるなら本値利用を推奨します。携帯機器のバッテリ寿命向上に大きく寄与します。

"high-performance"

消費電力よりパフォーマンスを優先する要求。

注意: この値を選択すると、デバイス生成時、ユーザーエージェントが電力節約のため低消費電力アダプターに切替え、デバイスロスを強制しやすくなります。 本当に必要な場合以外は指定を控えましょう。携帯機器のバッテリ寿命が大幅に低下する場合があります。

forceFallbackAdapter, boolean, デフォルトfalse

true指定時、フォールバックアダプターのみ返却可能。ユーザーエージェントがrequestAdapter()フォールバックアダプター未対応なら null で解決。

注意: requestAdapter()forceFallbackAdapterfalseでも他に適切なアダプターがなかった場合やユーザーエージェント判断で フォールバックアダプターを返す場合があります。 フォールバックアダプターでの動作を防ぎたい場合、info.isFallbackAdapter 属性を確認してからGPUDeviceを要求してください。

xrCompatible, boolean, デフォルトfalse

trueに設定すると、WebXRセッション向けの描画に最適なアダプターが返されるべきであることを示します。ユーザーエージェントやシステムがWebXRセッションをサポートしていない場合は、この値はアダプター選択時に無視されることがあります。

注意: xrCompatibletrue指定せずアダプター要求した場合、そのGPUDeviceWebXRセッション用描画に利用できません。

"high-performance" GPUAdapter を要求する例:
const gpuAdapter = await navigator.gpu.requestAdapter({
    powerPreference: 'high-performance'
});

4.3. GPUAdapter

GPUAdapterアダプターをカプセル化し、 その機能(featureslimits)を記述します。

GPUAdapter を取得するには、requestAdapter()を使います。

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapter {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo info;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

GPUAdapter には以下の不変プロパティがあります。

features, GPUSupportedFeatures, 読み取り専用

this.[[adapter]].[[features]]の値セット。

limits, GPUSupportedLimits, 読み取り専用

this.[[adapter]].[[limits]]の制限値。

info, GPUAdapterInfo, 読み取り専用

このGPUAdapterの下層物理アダプター情報。

同一GPUAdapterに対してはGPUAdapterInfoの値は常に一定です。

毎回同じオブジェクトが返されます。初回生成方法:

呼び出し元: GPUAdapter this.

戻り値: GPUAdapterInfo

コンテンツタイムライン手順:

  1. this.[[adapter]]に対して新しいアダプター情報を返す。

[[adapter]], 型 adapter, 読み取り専用

このGPUAdapterが参照するアダプター

GPUAdapter には以下のメソッドがあります:

requestDevice(descriptor)

アダプターからデバイスを要求します。

これは一度限りの操作であり、デバイスが返されたらアダプターは"consumed"状態になります。

呼び出し元: GPUAdapter this.

引数:

GPUAdapter.requestDevice(descriptor)メソッドの引数
パラメータ Nullable Optional 説明
descriptor GPUDeviceDescriptor 要求するGPUDeviceの詳細。

戻り値: Promise<GPUDevice>

コンテンツタイムライン手順:

  1. contentTimelineを現在のコンテンツタイムラインとする。

  2. promise新しいPromiseとする。

  3. adapterthis.[[adapter]]とする。

  4. initialization stepsthisデバイスタイムラインで発行する。

  5. promiseを返す。

デバイスタイムライン initialization steps:
  1. 次のいずれかの要件を満たしていない場合:

    満たさない場合、以降の手順をcontentTimelineで実行し終了:

    コンテンツタイムライン手順:
    1. promiseをTypeErrorで拒否する。

    注意: このエラーは、ブラウザが機能名を全く認識しない(GPUFeatureName定義にない)場合と同じです。 ブラウザが機能をサポートしない場合と、特定アダプターが機能をサポートしない場合の動作が収束します。

  2. 次のすべての要件を満たさなければなりません

    1. adapter.[[state]]"consumed"であってはならない。

    2. descriptor.requiredLimitsの各[key, value](valueundefinedでないもの)について:

      1. keysupported limitsメンバー名でなければならない。

      2. valueadapter.[[limits]][key]より良い値であってはならない。

      3. keyクラスアライメントの場合、valueは2の累乗かつ232未満でなければならない。

      注意: keyが未認識の場合、valueundefinedでも開発者向け警告表示を検討すべきです。

    満たさない場合、以降の手順をcontentTimelineで実行し終了:

    コンテンツタイムライン手順:
    1. promiseをOperationErrorで拒否する。

  3. adapter.[[state]]"expired"またはユーザーエージェントが要求を満たせない場合:

    1. deviceを新しいdeviceとする。

    2. Lose the device(device, "unknown").

    3. assertadapter.[[state]]"expired"である。

      注意: この場合、ユーザーエージェントはほぼすべての場合で開発者向け警告表示を検討すべきです。アプリケーションはrequestAdapter()から再初期化ロジックを行うべきです。

    それ以外の場合:

    1. devicedescriptorで記述された機能を持つ新しいデバイスとする。

    2. expireadapterを期限切れに。

  4. 以降の手順をcontentTimelineで発行する。

コンテンツタイムライン手順:
  1. gpuDeviceを新しいGPUDeviceインスタンスとする。

  2. gpuDevice.[[device]]deviceを設定。

  3. device.[[content device]]gpuDeviceを設定。

  4. gpuDevice.labeldescriptor.labelを設定。

  5. promiseをgpuDeviceで解決する。

    注意: アダプターが要求を満たせずデバイスが既に失われている場合は、device.lostpromiseより先に解決されています。

デフォルト機能・制限値でGPUDeviceを要求する例:
const gpuAdapter = await navigator.gpu.requestAdapter();
const gpuDevice = await gpuAdapter.requestDevice();

4.3.1. GPUDeviceDescriptor

GPUDeviceDescriptor はデバイス要求内容を記述します。

dictionary GPUDeviceDescriptor
         : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, (GPUSize64 or undefined)> requiredLimits = {};
    GPUQueueDescriptor defaultQueue = {};
};

GPUDeviceDescriptor には以下のメンバーがあります:

requiredFeatures, 型 sequence<GPUFeatureName>、デフォルト[]

デバイス要求で必要な機能を指定します。 アダプターがこれら機能を提供できない場合、要求は失敗します。

API呼び出しの検証では、指定した機能セットのみが利用可能であり、それ以外は利用不可です。

requiredLimits, record<DOMString, (GPUSize64 or undefined)>、デフォルト{}

デバイス要求で必要な制限値を指定します。 アダプターがこれら制限値を提供できない場合、要求は失敗します。

値がundefinedでない各キーはsupported limitsメンバー名でなければなりません。

生成されたデバイスのAPI呼び出しは、そのデバイスの厳密な制限値に従って検証されます(アダプターの制限値ではない。§ 3.6.2 制限参照)。

defaultQueue, GPUQueueDescriptor、デフォルト{}

デフォルトGPUQueueの記述内容。

サポートされていれば"texture-compression-astc"機能付きGPUDeviceを要求する例:
const gpuAdapter = await navigator.gpu.requestAdapter();

const requiredFeatures = [];
if (gpuAdapter.features.has('texture-compression-astc')) {
    requiredFeatures.push('texture-compression-astc')
}

const gpuDevice = await gpuAdapter.requestDevice({
    requiredFeatures
});
より高いmaxColorAttachmentBytesPerSample制限付きGPUDeviceを要求する例:
const gpuAdapter = await navigator.gpu.requestAdapter();

if (gpuAdapter.limits.maxColorAttachmentBytesPerSample < 64) {
    // 希望の制限値が未サポートの場合、より高い制限値を必要としないコードパスへフォールバックするか、
    // デバイスが最低要件を満たしていないことをユーザーに通知するなどの対応を取る。
}

// max color attachments bytes per sampleのより高い制限値を要求。
const gpuDevice = await gpuAdapter.requestDevice({
    requiredLimits: { maxColorAttachmentBytesPerSample: 64 },
});
4.3.1.1. GPUFeatureName

各GPUFeatureNameは、利用可能であれば、本来は無効となるWebGPUの追加的な利用を許可する機能セットを識別します。

enum GPUFeatureName {
    "core-features-and-limits",
    "depth-clip-control",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-bc-sliced-3d",
    "texture-compression-etc2",
    "texture-compression-astc",
    "texture-compression-astc-sliced-3d",
    "timestamp-query",
    "indirect-first-instance",
    "shader-f16",
    "rg11b10ufloat-renderable",
    "bgra8unorm-storage",
    "float32-filterable",
    "float32-blendable",
    "clip-distances",
    "dual-source-blending",
    "subgroups",
    "texture-formats-tier1",
    "texture-formats-tier2",
    "primitive-index",
};
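requiredFeaturesを組み立てる際には、L1484前後の例のようにアダプターのサポート状況を事前に確認します。複数機能をまとめて扱う場合は、次のようなヘルパーとして書けます(関数名selectSupportedFeaturesは説明用の仮のものです)。

```javascript
// 要求したい機能のうち、アダプターがサポートするものだけを残すヘルパー(仮称)。
// supported には GPUAdapter.features(setlike)か通常の Set を渡す想定。
function selectSupportedFeatures(supported, wanted) {
    return wanted.filter((f) => supported.has(f));
}
```

`requestDevice({ requiredFeatures: selectSupportedFeatures(adapter.features, [...]) })` のように利用することを想定しています。ただし、機能が必須のアプリケーションでは、黙って除外せずユーザーへ通知すべきです。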

4.4. GPUDevice

GPUDeviceデバイスをカプセル化し、その機能を公開します。

GPUDeviceWebGPUインターフェースを生成するトップレベルインターフェースです。

GPUDeviceを取得するには、requestDevice()を使用します。

[Exposed=(Window, Worker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo adapterInfo;

    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

GPUDeviceは 以下の不変プロパティを持ちます:

features, GPUSupportedFeatures, 読み取り専用

このデバイスがサポートするGPUFeatureName 値のセット([[device]].[[features]])。

limits, GPUSupportedLimits, 読み取り専用

このデバイスがサポートする制限値([[device]].[[limits]])。

queue, GPUQueue, 読み取り専用

このデバイスの主キューGPUQueue

adapterInfo, GPUAdapterInfo, 読み取り専用

このGPUDeviceを生成した物理アダプターの情報。

同じGPUDeviceに対しては、 GPUAdapterInfoの値は常に一定です。

毎回同じオブジェクトが返されます。初回生成方法:

呼び出し元: GPUDevice this.

戻り値: GPUAdapterInfo

コンテンツタイムライン手順:

  1. this.[[device]].[[adapter]]に対して新しいアダプター情報を返す。

[[device]]GPUDeviceが参照するdeviceです。

GPUDeviceは以下のメソッドを持ちます:

destroy()

デバイスを破棄し、以降の操作を禁止します。 未完了の非同期操作は失敗します。

注意: デバイスは何度破棄しても有効です。

呼び出し元: GPUDevice this.

コンテンツタイムライン手順:

  1. このデバイスのすべてのGPUBufferunmap()する。

  2. 以降の手順をthisデバイスタイムラインで発行する。

  1. Lose the device(this.[[device]], "destroyed").

注意: このデバイスに対して以降の操作が一切キューされないため、実装は未完了の非同期操作やリソース割り当て(アンマップ直後のメモリ含む)を即座に中断・解放できます。

GPUDevice許可バッファ用途
GPUDevice許可テクスチャ用途

4.5. 例

より堅牢なGPUAdapter およびGPUDevice要求のエラーハンドリング例:
let gpuDevice = null;

async function initializeWebGPU() {
    // ユーザーエージェントがWebGPUをサポートしているか確認
    if (!('gpu' in navigator)) {
        console.error("ユーザーエージェントがWebGPUをサポートしていません。");
        return false;
    }

    // アダプター要求
    const gpuAdapter = await navigator.gpu.requestAdapter();

    // 適切なアダプターが見つからない場合、requestAdapterはnullで解決されることがある
    if (!gpuAdapter) {
        console.error('WebGPUアダプターが見つかりません。');
        return false;
    }

    // デバイス要求
    // オプション辞書に無効な値が渡された場合、promiseはrejectされる。
    // 必ずアダプターのfeaturesやlimitsを事前に確認してからrequestDevice()を呼ぶこと。
    gpuDevice = await gpuAdapter.requestDevice();

    // requestDeviceはnullを返さないが、何らかの理由で有効なデバイス要求が満たせない場合
    // 既に失われたデバイスとしてresolveされることがあり得る。
    // また、デバイスは作成後も様々な理由(ブラウザのリソース管理、ドライバ更新等)で
    // いつでも失われる可能性があるため、常にロストデバイスを適切に扱うこと。
    gpuDevice.lost.then((info) => {
        console.error(`WebGPUデバイスが失われました: ${info.message}`);

        gpuDevice = null;

        // デバイスロストの多くは一時的なものなので、アプリケーションは
        // 以前のデバイスが失われたら新規取得を試みるべき(意図的なdestroy理由以外)。
        // 前のデバイスで作成したWebGPUリソース(バッファ、テクスチャ等)は
        // 新しいデバイスで再作成する必要がある。
        if (info.reason != 'destroyed') {
            initializeWebGPU();
        }
    });

    onWebGPUInitialized();

    return true;
}

function onWebGPUInitialized() {
    // ここからWebGPUリソース作成処理を開始
}

initializeWebGPU();

5. バッファ

5.1. GPUBuffer

GPUBuffer はGPU操作で利用できるメモリブロックを表します。 データは線形レイアウトで格納されており、割り当て領域の各バイトは GPUBufferの先頭からのオフセットで参照可能ですが、 操作ごとにアライメント制約があります。一部のGPUBufferは マップ可能であり、対応するメモリブロックはArrayBuffer (マッピング)経由でアクセスできます。

GPUBuffercreateBuffer()で作成します。 バッファはmappedAtCreationを指定可能です。

[Exposed=(Window, Worker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;

    readonly attribute GPUBufferMapState mapState;

    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};

GPUBufferは 以下の不変プロパティを持ちます:

size, GPUSize64Out, 読み取り専用

GPUBuffer の割り当てサイズ(バイト単位)。

usage, GPUFlagsConstant, 読み取り専用

このGPUBufferで許可されている用途。

GPUBufferは 以下のコンテンツタイムラインプロパティを持ちます:

mapState, GPUBufferMapState, 読み取り専用

バッファの現在のGPUBufferMapState

"unmapped"

バッファがthis.getMappedRange()で利用できるようにマップされていません。

"pending"

バッファのマッピング要求が保留中です。 mapAsync()で検証失敗または成功する可能性があります。

"mapped"

バッファがマップされており、this.getMappedRange() が利用できます。

getter手順

コンテンツタイムライン手順:
  1. this.[[mapping]]nullでなければ、 "mapped"を返す。

  2. this.[[pending_map]]nullでなければ、 "pending"を返す。

  3. "unmapped"を返す。

[[pending_map]], 型 Promise<void> または null(初期値null

現在保留中のマップ要求のPromisemapAsync()呼び出しで作成)。保留中の要求がなければnullです。

保留中のマップは常に1つしかありません。既に要求中の場合、mapAsync()は即座に拒否します。

[[mapping]], 型 active buffer mapping または null(初期値null

バッファが現在getMappedRange()で利用可能な場合のみ設定されます。 それ以外の場合はnullです([[pending_map]]があっても)。

active buffer mappingは以下のフィールドを持つ構造体です:

data, 型 Data Block

このGPUBufferのマッピングデータ。 このデータはArrayBufferビューを通じてアクセスされ、getMappedRange()で返され、viewsに格納されます。

mode, 型 GPUMapModeFlags

対応するmapAsync() またはcreateBuffer()呼び出しで指定されたGPUMapModeFlags

range, 型 タプル [unsigned long long, unsigned long long]

マップされたGPUBufferの範囲。

views, 型 list<ArrayBuffer>

アプリケーションにArrayBufferとして返されたビュー。 unmap()呼び出し時に切り離すため管理されます。

active buffer mappingを初期化するには、 mode modeとrange rangeで以下のコンテンツタイムライン手順を実行:
  1. sizerange[1] - range[0]とする。

  2. data? CreateByteDataBlock(size)で作成。

    注意:
    この操作はRangeErrorを投げることがあります。 一貫性・予測可能性のため:
    • その時点でnew ArrayBuffer()が成功するサイズは、この割り当ても成功すべき

    • その時点でnew ArrayBuffer()RangeError決定的に投げるサイズは、この割り当ても同様にすべき

  3. 以下を持つactive buffer mappingを返す:

バッファのマッピングとアンマッピング。
バッファのマッピング失敗。

GPUBufferは 以下のデバイスタイムラインプロパティを持ちます:

[[internal state]]

バッファの現在の内部状態:

"available"

(無効化されていなければ)キュー操作に利用可能。

"unavailable"

マップされているためキュー操作には利用不可。

"destroyed"

destroy()されたため、いかなる操作にも利用不可。

5.1.1. GPUBufferDescriptor

dictionary GPUBufferDescriptor
         : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};

GPUBufferDescriptor には以下のメンバーがあります:

size, GPUSize64

バッファのサイズ(バイト単位)。

usage, GPUBufferUsageFlags

バッファで許可される用途。

mappedAtCreation, boolean(デフォルトfalse

trueの場合、バッファは作成時にすでにマップされた状態となり、getMappedRange()が即座に呼び出し可能となります。mappedAtCreationtrueにしても、usageMAP_READMAP_WRITE を含めなくても有効です。 これはバッファの初期データを設定するために利用できます。

バッファ作成が最終的に失敗した場合でも、アンマップされるまではマップ範囲に書き込み/読み出しできるように見えることが保証されます。
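mappedAtCreationで初期データを設定する典型パターンは、次のようなヘルパーとして書けます(関数名createBufferWithDataは説明用の仮のものです。mappedAtCreation使用時はsizeが4の倍数である必要があるため、切り上げています)。

```javascript
// mappedAtCreation: true でバッファを作成し、初期データを書き込んでからアンマップするヘルパー(仮称)。
// data は ArrayBuffer または TypedArray を想定。
function createBufferWithData(device, data, usage) {
    // mappedAtCreation ではサイズが 4 の倍数でなければならない
    const size = Math.ceil(data.byteLength / 4) * 4;
    const buffer = device.createBuffer({ size, usage, mappedAtCreation: true });
    const src = new Uint8Array(data.buffer ?? data, data.byteOffset ?? 0, data.byteLength);
    new Uint8Array(buffer.getMappedRange()).set(src);
    buffer.unmap(); // アンマップ後、バッファは GPU から利用可能になる
    return buffer;
}
```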

5.1.2. バッファ用途

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};

GPUBufferUsage フラグはGPUBufferが作成後にどのように利用できるかを決定します:

MAP_READ

バッファは読み出し用にマップ可能です。(例:mapAsync()GPUMapMode.READを指定)

COPY_DSTとだけ組み合わせ可能です。

MAP_WRITE

バッファは書き込み用にマップ可能です。(例:mapAsync()GPUMapMode.WRITEを指定)

COPY_SRCとだけ組み合わせ可能です。

COPY_SRC

バッファはコピー操作のソースとして利用可能です。(例:copyBufferToBuffer()copyBufferToTexture()呼び出しのsource引数)

COPY_DST

バッファはコピーや書き込み操作の宛先として利用可能です。(例:copyBufferToBuffer()copyTextureToBuffer()呼び出しのdestination引数、あるいはwriteBuffer()ターゲット)

INDEX

バッファはインデックスバッファとして利用可能です。(例:setIndexBuffer()への渡し)

VERTEX

バッファは頂点バッファとして利用可能です。(例:setVertexBuffer()への渡し)

UNIFORM

バッファはユニフォームバッファとして利用可能です。(例:GPUBufferBindingLayout のバインドグループエントリで buffer.type"uniform"の場合)

STORAGE

バッファはストレージバッファとして利用可能です。(例:GPUBufferBindingLayout のバインドグループエントリで buffer.type"storage" または"read-only-storage"の場合)

INDIRECT

バッファは間接コマンド引数の保存に利用可能です。(例:indirectBuffer引数としてdrawIndirect()dispatchWorkgroupsIndirect() 呼び出しで利用)

QUERY_RESOLVE

バッファはクエリ結果の取得に利用可能です。(例:destination引数としてresolveQuerySet()呼び出しで使用)

5.1.3. バッファ作成

createBuffer(descriptor)

GPUBufferを作成します。

呼び出し元: GPUDevice this.

引数:

GPUDevice.createBuffer(descriptor)メソッドの引数。
パラメータ Nullable Optional 説明
descriptor GPUBufferDescriptor 作成するGPUBufferの記述。

戻り値: GPUBuffer

コンテンツタイムライン手順:

  1. b!新しいWebGPUオブジェクトの作成(this, GPUBuffer, descriptor)とする。

  2. b.sizedescriptor.sizeを設定する。

  3. b.usagedescriptor.usageを設定する。

  4. もしdescriptor.mappedAtCreationtrueなら:

    1. descriptor.sizeが4の倍数でない場合、 RangeErrorを投げる。

    2. b.[[mapping]]?active buffer mappingの初期化 (mode WRITE, range [0, descriptor.size]) を設定する。

  5. initialization stepsthisデバイスタイムラインで発行する。

  6. bを返す。

デバイスタイムライン initialization steps:
  1. 以下の要件が満たされない場合、検証エラーを生成し、bを無効化して、returnする。

注意: バッファ作成が失敗し、descriptor.mappedAtCreationfalseの場合、 mapAsync()呼び出しは拒否されるため、マッピング用に割り当てられたリソースは破棄または再利用される可能性があります。

  1. もしdescriptor.mappedAtCreationtrueなら:

    1. b.[[internal state]] を"unavailable"に設定する。

    それ以外:

    1. b.[[internal state]] を"available"に設定する。

  2. bのデバイス割り当てを各バイト0で作成する。

    割り当てが副作用なしに失敗した場合、メモリ不足エラーを生成し、bを無効化して、returnする。

書き込み可能な128バイトのユニフォームバッファを作成する例:
const buffer = gpuDevice.createBuffer({
    size: 128,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});

5.1.4. バッファ破棄

アプリケーションがGPUBufferを不要と判断した場合、destroy()を呼び出すことでガベージコレクション前にアクセスを失うことができます。バッファの破棄はマッピングも解除し、マッピング用に割り当てられたメモリも解放します。

注意: これにより、ユーザーエージェントは、そのバッファを使ったすべての操作が完了した時点でGPUメモリを回収できます。

GPUBufferは以下のメソッドを持ちます:

destroy()

GPUBufferを破棄します。

注意: バッファは何度破棄しても有効です。

呼び出し元: GPUBuffer this.

戻り値: undefined

コンテンツタイムライン手順:

  1. this.unmap()を呼び出す。

  2. 以降の手順をthis.[[device]]デバイスタイムラインで発行する。

デバイスタイムライン手順:
  1. this.[[internal state]] を "destroyed"に設定する。

注意: このバッファを使った以降の操作は一切キューできなくなるため、実装はリソース割り当て(アンマップ直後のメモリも含む)を即座に解放可能です。

5.2. バッファのマッピング

アプリケーションはGPUBufferのマッピングを要求でき、これによりArrayBufferを通じてGPUBufferの割り当て領域の一部にアクセスできるようになります。GPUBufferのマッピング要求はmapAsync()で非同期に行われ、ユーザーエージェントがGPUによる利用完了を確認してからアプリケーションが内容にアクセスできるようにします。マップ中のGPUBufferはGPUで利用できず、unmap()でアンマップするまで、そのバッファを使う作業をキュータイムラインに登録できません。

一度GPUBufferがマップされると、アプリケーションはgetMappedRange()で範囲アクセスを同期的に要求できます。返されたArrayBufferunmap()(直接またはGPUBuffer.destroy()GPUDevice.destroy()経由)でのみdetach可能です。transferはできません。他の操作がそれを試みるとTypeErrorが投げられます。
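このマッピングの流れ(コピー → mapAsync() → getMappedRange() → unmap())は、読み出し用ステージングバッファを使う次のようなスケッチで表せます(関数名readbackBufferは説明用の仮のもので、srcはCOPY_SRC用途付きのGPUBufferを想定しています)。

```javascript
// COPY_SRC 用途を持つバッファ src の内容を、MAP_READ バッファ経由で読み出すスケッチ(仮称)。
async function readbackBuffer(device, src, size) {
    const staging = device.createBuffer({
        size,
        usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
    });
    const encoder = device.createCommandEncoder();
    encoder.copyBufferToBuffer(src, 0, staging, 0, size);
    device.queue.submit([encoder.finish()]);

    // mapAsync の解決後、getMappedRange で内容へアクセスできる
    await staging.mapAsync(GPUMapMode.READ);
    // 返される ArrayBuffer は unmap() で切り離されるため、先に内容をコピーしておく
    const result = staging.getMappedRange().slice(0);
    staging.unmap();
    staging.destroy();
    return result;
}
```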

typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};

GPUMapMode フラグはGPUBuffermapAsync()でどのようにマップされるかを決定します:

READ

このフラグはMAP_READ用途で作成されたバッファにのみ有効です。

バッファがマップされると、getMappedRange()呼び出しはバッファの現行値を含むArrayBufferを返します。返されたArrayBufferの変更はunmap()呼び出し後に破棄されます。

WRITE

このフラグはMAP_WRITE用途で作成されたバッファにのみ有効です。

バッファがマップされると、getMappedRange()呼び出しはバッファの現行値を含むArrayBufferを返します。返されたArrayBufferの変更はGPUBufferunmap()呼び出し後に保存されます。

注意: MAP_WRITE用途のバッファはCOPY_SRC用途のみと組み合わせ可能なため、書き込み用マッピングではGPUで生成された値は返されません。返されるArrayBufferはデフォルト初期化(ゼロ)または前回マッピング時にウェブページで書き込まれたデータのみを含みます。

GPUBufferは以下のメソッドを持ちます:

mapAsync(mode, offset, size)

指定された範囲のGPUBufferをマップし、Promise が解決されるとGPUBufferの内容をgetMappedRange()でアクセスできるようになります。

返されたPromiseの解決はマップが完了したことのみを示し、 コンテンツタイムライン上で見える他の操作の完了は保証しません。 特に、他のPromiseonSubmittedWorkDone()や他のmapAsync())が解決されていることは意味しません。

ただし、onSubmittedWorkDone()が返すPromiseの解決は、その呼び出しより前に、同じキューで排他的に使用されていたGPUBufferに対して行われたmapAsync()の完了を意味します。

呼び出し元: GPUBuffer this.

引数:

GPUBuffer.mapAsync(mode, offset, size)メソッドの引数。
パラメータ Nullable Optional 説明
mode GPUMapModeFlags バッファを読み取り/書き込みどちらでマップするか。
offset GPUSize64 マップ範囲の開始バイトオフセット。
size GPUSize64 マップする範囲のバイト数。

戻り値: Promise<undefined>

コンテンツタイムライン手順:

  1. contentTimelineを現在のコンテンツタイムラインとする。

  2. this.mapState"unmapped"でない場合:

    1. this.[[device]]デバイスタイムラインearly-reject stepsを発行する。

    2. PromiseをOperationErrorで拒否して返す。

  3. pを新しいPromiseとする。

  4. this.[[pending_map]]pを設定する。

  5. this.[[device]]デバイスタイムラインvalidation stepsを発行する。

  6. pを返す。

デバイスタイムライン early-reject steps:
  1. 検証エラーを生成する

  2. Return。

デバイスタイムライン validation steps:
  1. sizeundefinedの場合:

    1. rangeSizeにmax(0, this.size - offset)を設定。

    それ以外の場合:

    1. rangeSizesizeを設定。

  2. 以下の条件が満たされない場合:

    • this有効でなければならない。

    1. deviceLosttrueに設定。

    2. contentTimelinemap failure stepsを発行。

    3. Return。

  3. 以下の条件が満たされない場合:

    • this.[[internal state]] が"available"である。

    • offsetが8の倍数である。

    • rangeSizeが4の倍数である。

    • offset + rangeSize ≤ this.size である。

    • modeGPUMapModeで定義されたビットのみを含む。

    • modeREADまたはWRITEのうち1つだけを含む。

    • modeREADを含む場合、this.usageMAP_READを含む必要がある。

    • modeWRITEを含む場合、this.usageMAP_WRITEを含む必要がある。

    それ以外の場合:

    1. deviceLostfalseに設定。

    2. contentTimelinemap failure stepsを発行。

    3. 検証エラーを生成する

    4. Return。

  4. this.[[internal state]] を"unavailable"に設定。

    注: バッファがマップされている間は、unmap()まで内容は変更されません。

  5. 次のいずれかのイベントが発生した時点(先に発生した方、またはすでに発生していれば):

    その後、this.[[device]]デバイスタイムラインで後続の手順を発行する。

デバイスタイムライン手順:
  1. this.[[device]]失われた場合はdeviceLosttrue、それ以外はfalseに設定。

    注: デバイス喪失は前ブロックとこの間でも起こり得ます。

  2. deviceLosttrueの場合:

    1. contentTimelinemap failure stepsを発行。

    それ以外の場合:

    1. internalStateAtCompletionthis.[[internal state]]とする。

      注: この時点でunmap()呼び出しでバッファが再び"available"になった場合、[[pending_map]]pと異なるため、以下のマッピングは成功しません。

    2. dataForMappedRegionthisoffsetからrangeSizeバイト分の内容を設定。

    3. contentTimelinemap success stepsを発行。

コンテンツタイムライン map success steps:
  1. this.[[pending_map]]pと異なる場合:

    注: unmap()によりマップがキャンセルされています。

    1. Assert pは拒否されている。

    2. Return。

  2. Assert pはpendingである。

  3. Assert internalStateAtCompletionは"unavailable"。

  4. mappingactive buffer mappingの初期化 (mode mode, range [offset, offset + rangeSize])で生成する。

    この割り当てに失敗した場合:

    1. this.[[pending_map]]nullにし、RangeErrorでpを拒否

    2. Return。

  5. mapping.dataの内容をdataForMappedRegionに設定する。

  6. this.[[mapping]]mappingを設定する。

  7. this.[[pending_map]]nullにし、pをresolveする。

コンテンツタイムライン map failure steps:
  1. this.[[pending_map]]pと異なる場合:

    注: unmap()によりマップがキャンセルされています。

    1. Assert pはすでに拒否されている。

    2. Return。

  2. Assert pはまだpendingである。

  3. this.[[pending_map]]nullに設定する。

  4. deviceLostがtrueの場合:

    1. pをAbortErrorで拒否

      注: unmap()でキャンセルされた場合も同じエラータイプです。

    それ以外の場合:

    1. pをOperationErrorで拒否

getMappedRange(offset, size)

指定したマップ範囲のArrayBufferを返します。内容はGPUBufferのものです。

呼び出し元: GPUBuffer this.

引数:

GPUBuffer.getMappedRange(offset, size)メソッドの引数。
パラメータ Nullable Optional 説明
offset GPUSize64 バッファ内容取得開始のバイトオフセット。
size GPUSize64 返すArrayBufferのバイトサイズ。

戻り値: ArrayBuffer

コンテンツタイムライン手順:

  1. sizeが指定されていなければ:

    1. rangeSizeをmax(0, this.size - offset)とする。

    指定されていればrangeSizesize

  2. 以下の条件が満たされない場合、OperationErrorを投げて終了:

    注: GPUBuffermappedAtCreation で作成された場合、バッファが無効であってもgetMappedRangeは常に有効です。コンテンツタイムラインがその無効性を認識できない場合があるためです。

  3. datathis.[[mapping]].dataとする。

  4. view! ArrayBufferの生成(サイズrangeSize、ポインタはdataの(offset - [[mapping]].range[0])バイト先を参照)とする。

    注: datamapAsync()createBuffer()ですでに割り当てられているため、ここでRangeErrorは投げられません。

  5. view.[[ArrayBufferDetachKey]]に"WebGPUBufferMapping"を設定する。

    注: TypeErrorは、unmap()以外でDetachArrayBufferしようとした場合に投げられる。

  6. viewthis.[[mapping]].viewsに追加する。

  7. viewを返す。

注: getMappedRange()がマップ状態の確認なしに呼び出されて成功した場合、ユーザーエージェントは開発者向け警告の表示を検討すべきです。マップ状態は、mapAsync()の成功を待つ、mapStateが"mapped"であることを確認する、またはそれ以降のonSubmittedWorkDone()の成功を待つことで確認できます。
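上記の注のとおり、mapAsync()の成功を待ってからgetMappedRange()を呼び出す使い方の参考例を示します(bufferはMAP_READ用途で作成済みと仮定した、ブラウザのWebGPU実装向けのスケッチです):

```javascript
// 仮定: buffer は GPUBufferUsage.MAP_READ を含む用途で作成済み。
// ブラウザのWebGPU実装でのみ実行できる参考スケッチです。
async function readbackBuffer(buffer, size) {
    // mapAsync() の成功を待つことで、マップ状態を確認せずに
    // getMappedRange() を呼んでしまう事態を避けられます。
    await buffer.mapAsync(GPUMapMode.READ, 0, size);
    // マップ範囲の内容をコピーしてから…
    const copy = new Uint8Array(buffer.getMappedRange(0, size)).slice();
    // …unmap() で ArrayBuffer をデタッチし、内容を再びGPUで利用可能にします。
    buffer.unmap();
    return copy;
}
```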

unmap()

マップされた範囲のアンマップを行い、内容をGPUで再び利用可能にします。

呼び出し元: GPUBuffer this.

戻り値: undefined

コンテンツタイムライン手順:

  1. this.[[pending_map]]nullでない場合:

    1. this.[[pending_map]]AbortErrorで拒否する。

    2. this.[[pending_map]]nullに設定

  2. this.[[mapping]]nullの場合:

    1. Return。

  3. ArrayBuffer abについて、this.[[mapping]].views内:

    1. DetachArrayBuffer(ab,"WebGPUBufferMapping")を実行

  4. bufferUpdatenullとする。

  5. this.[[mapping]].modeWRITEを含む場合:

    1. bufferUpdate = { data: this.[[mapping]].data, offset: this.[[mapping]].range[0] }とする。

    注: WRITEモードでない場合、アンマップ時にアプリケーションによるローカル変更は破棄され、後のマッピング内容には影響しない。

  6. this.[[mapping]]nullに設定

  7. 以降の手順をthis.[[device]]デバイスタイムラインで発行

デバイスタイムライン手順:
  1. 以下の条件が満たされない場合はreturn:

  2. Assert this.[[internal state]] は"unavailable"。

  3. bufferUpdatenullでなければ:

    1. this.[[device]].queueキュータイムラインで以下発行:

      キュータイムライン手順:
      1. thisbufferUpdate.offsetからbufferUpdate.dataで内容更新

  4. this.[[internal state]]を"available"に設定

6. テクスチャとテクスチャビュー

6.1. GPUTexture

テクスチャは、1d2d3d のデータ配列で構成され、各要素が複数の値を持つことで色などを表現できます。テクスチャは、作成時のGPUTextureUsage に応じて様々な方法で読み書きが可能です。例えば、レンダー/コンピュートパイプラインのシェーダからサンプリング・読み書きでき、レンダーパスの出力として書き込むこともできます。 内部的には、テクスチャは線形アクセスではなく多次元アクセスに最適化されたGPUメモリレイアウトで格納されていることが多いです。

1つのテクスチャは、1つ以上のテクスチャサブリソースから構成されます。 各サブリソースは、ミップマップレベル、(2dテクスチャの場合のみ)配列レイヤー、およびアスペクトによって一意に識別されます。

テクスチャサブリソースサブリソースであり、それぞれが1つの利用スコープ内で異なる内部用途に利用できます。

ミップマップレベル内の各サブリソースは、 1つ小さいレベル番号の対応するサブリソースと比べて各空間次元で約半分のサイズです (論理ミップレベル別テクスチャ範囲参照)。 レベル0のサブリソースがテクスチャ本体の寸法となります。 より小さいレベルは通常、同じ画像の低解像度版の格納に用いられます。 GPUSamplerやWGSLは、 詳細度レベルの選択や補間を明示的または自動で行う仕組みを提供します。

"2d" テクスチャは配列レイヤーの配列になる場合があります。 各レイヤー内のサブリソースは他レイヤーの同じリソースと同サイズです。 2d以外のテクスチャでは全てのサブリソースの配列レイヤーインデックスは0です。

各サブリソースはアスペクトを持ちます。 カラーテクスチャはcolorのみです。 深度・ステンシルフォーマットのテクスチャは複数アスペクト(depthstencil)を持つ場合があり、 depthStencilAttachment"depth"バインディングなどで特殊な用途に使われます。

"3d" テクスチャは複数のスライス(各z値ごとの2次元画像)を持ちます。 スライスはサブリソースとは異なります。

[Exposed=(Window, Worker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
};
GPUTexture includes GPUObjectBase;

GPUTexture には以下の不変プロパティがあります:

width, GPUIntegerCoordinateOut, 読み取り専用

このGPUTextureの幅。

height, GPUIntegerCoordinateOut, 読み取り専用

このGPUTextureの高さ。

depthOrArrayLayers, GPUIntegerCoordinateOut, 読み取り専用

このGPUTextureの深度またはレイヤー数。

mipLevelCount, GPUIntegerCoordinateOut, 読み取り専用

このGPUTextureのミップレベル数。

sampleCount, GPUSize32Out, 読み取り専用

このGPUTextureのサンプル数。

dimension, GPUTextureDimension, 読み取り専用

GPUTextureサブリソースごとのテクセルの次元。

format, GPUTextureFormat, 読み取り専用

このGPUTextureのフォーマット。

usage, GPUFlagsConstant, 読み取り専用

このGPUTextureで許可される用途。

[[viewFormats]], 型 sequence<GPUTextureFormat>

このGPUTextureに対して GPUTextureViewDescriptor.format として利用可能なGPUTextureFormatの集合。

GPUTexture には以下のデバイスタイムラインプロパティがあります:

[[destroyed]], 型 boolean, 初期値false

テクスチャが破棄された場合、いかなる操作にも利用できなくなり、基盤となるメモリも解放可能となります。

compute render extent(baseSize, mipLevel)

引数:

戻り値: GPUExtent3DDict

デバイスタイムライン手順:

  1. extentを新しいGPUExtent3DDict オブジェクトとする。

  2. extent.width にmax(1, baseSize.width ≫ mipLevel)を設定。

  3. extent.height にmax(1, baseSize.height ≫ mipLevel)を設定。

  4. extent.depthOrArrayLayers に1を設定。

  5. extentを返す。
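上記の手順をJavaScriptで書き下すと次のようになります(仕様の手順に基づく参考スケッチです):

```javascript
// 仮定: baseSize は width/height/depthOrArrayLayers を持つプレーンオブジェクト。
function computeRenderExtent(baseSize, mipLevel) {
    return {
        width: Math.max(1, baseSize.width >> mipLevel),
        height: Math.max(1, baseSize.height >> mipLevel),
        // レンダー範囲は常に1レイヤー分
        depthOrArrayLayers: 1,
    };
}
```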

論理ミップレベル別テクスチャ範囲とは、特定のミップレベルにおけるテクスチャのテクセル単位のサイズです。次の手順で算出されます:

Logical miplevel-specific texture extent(descriptor, mipLevel)

引数:

戻り値: GPUExtent3DDict

  1. extentを新しいGPUExtent3DDictオブジェクトとする。

  2. descriptor.dimension が次の場合:

    "1d"
    "2d"
    "3d"
  3. extentを返す。
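この手順の各dimensionごとの詳細は本抜粋では省略されていますが、仕様の定義に基づくと概ね次のスケッチのようになります(参考実装であり規定ではありません):

```javascript
// 仮定: descriptor は dimension と size({width, height, depthOrArrayLayers})を持つ。
function logicalMiplevelSpecificTextureExtent(descriptor, mipLevel) {
    const size = descriptor.size;
    switch (descriptor.dimension) {
        case '1d':
            // 1dテクスチャは高さ・深度とも常に1
            return { width: Math.max(1, size.width >> mipLevel),
                     height: 1, depthOrArrayLayers: 1 };
        case '2d':
            // 配列レイヤー数はミップレベルで縮小されない
            return { width: Math.max(1, size.width >> mipLevel),
                     height: Math.max(1, size.height >> mipLevel),
                     depthOrArrayLayers: size.depthOrArrayLayers };
        case '3d':
            // 3dテクスチャは深度も縮小される
            return { width: Math.max(1, size.width >> mipLevel),
                     height: Math.max(1, size.height >> mipLevel),
                     depthOrArrayLayers: Math.max(1, size.depthOrArrayLayers >> mipLevel) };
    }
}
```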

物理ミップレベル別テクスチャ範囲とは、特定のミップレベルにおけるテクスチャのテクセル単位のサイズ(テクセルブロックを完全に構成するための余分なパディングを含む)です。次の手順で算出されます:

Physical miplevel-specific texture extent(descriptor, mipLevel)

引数:

戻り値: GPUExtent3DDict

  1. extentを新しいGPUExtent3DDictオブジェクトとする。

  2. logicalExtent論理ミップレベル別テクスチャ範囲(descriptor, mipLevel)を設定。

  3. descriptor.dimension が次の場合:

    "1d"
    "2d"
    "3d"
  4. extentを返す。

6.1.1. GPUTextureDescriptor

dictionary GPUTextureDescriptor
         : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
};

GPUTextureDescriptor には以下のメンバーがあります:

size, GPUExtent3D

テクスチャの幅・高さ・深度またはレイヤー数。

mipLevelCount, GPUIntegerCoordinate(デフォルト値1

このテクスチャが持つミップレベルの数。

sampleCount, GPUSize32(デフォルト値1

テクスチャのサンプル数。sampleCount > 1の場合はマルチサンプルテクスチャ。

dimension, GPUTextureDimension(デフォルト値"2d"

テクスチャが一次元か、二次元レイヤー配列か、三次元か。

format, GPUTextureFormat

テクスチャのフォーマット。

usage, GPUTextureUsageFlags

テクスチャの許可用途。

viewFormats, 型 sequence<GPUTextureFormat>(デフォルト値[]

このテクスチャでformat としてcreateView() を呼び出す際に許可される値(実際のformatを含む)。

注:
このリストにフォーマットを追加するとパフォーマンスに大きな影響が出る可能性があるため、不要な追加は避けてください。

実際の影響はシステム依存ですので、アプリケーションごとに様々なシステムで検証が必要です。 例えば、あるシステムではformatviewFormats"rgba8unorm-srgb" を入れると、"rgba8unorm" のテクスチャより最適でなくなる場合があります。他フォーマットや組み合わせでも同様の注意点があります。

このリストのフォーマットは、テクスチャのフォーマットとテクスチャビュー・フォーマット互換でなければなりません。

2つのGPUTextureFormat formatviewFormatは、以下の場合テクスチャビュー・フォーマット互換です:
  • formatviewFormatが等しい場合

  • formatviewFormatsrgb-srgbサフィックス)だけが異なる場合

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};
"1d"

一次元(幅のみ)のテクスチャ。"1d" テクスチャはミップマップ不可・マルチサンプル不可・圧縮/深度/ステンシル不可・レンダーターゲット不可です。

"2d"

幅・高さを持ち、レイヤーも持てるテクスチャ。

"3d"

幅・高さ・深度を持つテクスチャ。"3d" テクスチャはマルチサンプル不可・フォーマットは3d対応(プレーンカラーフォーマットや一部パック/圧縮フォーマット)のみ。

6.1.2. テクスチャ用途

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};

GPUTextureUsage のフラグは、GPUTexture の作成後の用途を決定します:

COPY_SRC

コピー操作のソースとして利用可能(例:source引数としてcopyTextureToTexture()copyTextureToBuffer())。

COPY_DST

コピー・書き込み操作のデスティネーションとして利用可能(例:destination引数としてcopyTextureToTexture()copyBufferToTexture()writeTexture()のターゲット)。

TEXTURE_BINDING

シェーダでサンプル用テクスチャとしてバインド可能(例:GPUTextureBindingLayoutのバインドグループエントリ)。

STORAGE_BINDING

シェーダでストレージテクスチャとしてバインド可能(例:GPUStorageTextureBindingLayoutのバインドグループエントリ)。

RENDER_ATTACHMENT

レンダーパスのカラー/深度・ステンシルアタッチメントとして利用可能(例:GPURenderPassColorAttachment.viewGPURenderPassDepthStencilAttachment.view)。

maximum mipLevel count(dimension, size)

引数:

  1. 最大次元値mを計算:

  2. floor(log2(m)) + 1 を返す。
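この計算をJavaScriptで書き下すと次のようになります("1d"の扱いは仕様の検証規則に基づく仮定を含む参考スケッチです):

```javascript
// 仮定: size は width/height/depthOrArrayLayers を持つプレーンオブジェクト。
function maxMipLevelCount(dimension, size) {
    // "1d" テクスチャはミップマップ不可のため常に1(仮定)
    if (dimension === '1d') return 1;
    // "2d" は幅と高さ、"3d" は幅・高さ・深度の最大値 m を使う
    const m = dimension === '3d'
        ? Math.max(size.width, size.height, size.depthOrArrayLayers)
        : Math.max(size.width, size.height);
    return Math.floor(Math.log2(m)) + 1;
}
```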

6.1.3. テクスチャの作成

createTexture(descriptor)

GPUTextureを作成します。

呼び出し元: GPUDevice this.

引数:

GPUDevice.createTexture(descriptor)メソッドの引数。
パラメータ Nullable Optional 説明
descriptor GPUTextureDescriptor 作成するGPUTextureの説明。

戻り値: GPUTexture

コンテンツタイムライン手順:

  1. ? GPUExtent3D形状の検証(descriptor.size)。

  2. ? テクスチャフォーマット必要機能の検証descriptor.formatthis.[[device]])。

  3. ? 各viewFormats要素のテクスチャフォーマット必要機能の検証descriptor.viewFormatsthis.[[device]])。

  4. t! 新しいWebGPUオブジェクトの生成(this, GPUTexture, descriptor)とする。

  5. t.widthdescriptor.size.widthを設定。

  6. t.heightdescriptor.size.heightを設定。

  7. t.depthOrArrayLayersdescriptor.size.depthOrArrayLayersを設定。

  8. t.mipLevelCountdescriptor.mipLevelCountを設定。

  9. t.sampleCountdescriptor.sampleCountを設定。

  10. t.dimensiondescriptor.dimensionを設定。

  11. t.formatdescriptor.formatを設定。

  12. t.usagedescriptor.usageを設定。

  13. thisデバイスタイムラインinitialization stepsを発行。

  14. tを返す。

デバイスタイムライン initialization steps:
  1. 以下の条件が満たされない場合検証エラーの生成tの無効化、return。

  2. t.[[viewFormats]]descriptor.viewFormatsを設定。

  3. 各サブリソースの各テクセルブロックが、ゼロのビット表現と等価なテクセル表現になるように、tのデバイス割り当てを作成する。

    割り当てが副作用なしに失敗した場合、 メモリ不足エラー生成tの無効化、return。

GPUTextureDescriptorの検証(this, descriptor):

引数:

デバイスタイムライン手順:

  1. limitsthis.[[limits]]とする。

  2. 以下すべて満たせばtrue、そうでなければfalseを返す:

16x16、RGBA、2D、配列レイヤー1・ミップレベル1のテクスチャ生成例:
const texture = gpuDevice.createTexture({
    size: { width: 16, height: 16 },
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING,
});

6.1.4. テクスチャの破棄

アプリケーションがGPUTexture を必要としなくなった場合、ガベージコレクションを待たずにdestroy()を呼び出してアクセスを破棄できます。

注: これにより、ユーザーエージェントはGPUTexture に関連付けられたGPUメモリを それまでに提出されたすべての操作が完了次第、回収できるようになります。

GPUTexture には次のメソッドがあります:

destroy()

GPUTextureを破棄します。

呼び出し元: GPUTexture this.

戻り値: undefined

コンテンツタイムライン手順:

  1. 以降の手順をデバイスタイムラインで発行する。

デバイスタイムライン手順:
  1. this.[[destroyed]] をtrueに設定する。

6.2. GPUTextureView

GPUTextureView は、特定のGPUTextureが持つテクスチャサブリソースの部分集合へのビューです。

[Exposed=(Window, Worker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

GPUTextureView には以下の不変プロパティがあります:

[[texture]], readonly

このビューが参照するGPUTexture

[[descriptor]], readonly

このテクスチャビューを記述するGPUTextureViewDescriptor

GPUTextureViewDescriptorのすべてのオプションフィールドが定義済みです。

[[renderExtent]], readonly

レンダー可能ビューの場合、描画時の有効なGPUExtent3DDict

注: この範囲はbaseMipLevelに依存します。

テクスチャビューviewサブリソース集合は、 [[descriptor]] descを用いて、 view.[[texture]] のサブリソースのうち、各サブリソースsが以下を満たすものです:

2つのGPUTextureView オブジェクトは、そのサブリソース集合が交差する場合に限りテクスチャビュー・エイリアスとなります。

6.2.1. テクスチャビューの作成

dictionary GPUTextureViewDescriptor
         : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureUsageFlags usage = 0;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};

GPUTextureViewDescriptor には以下のメンバーがあります:

format, GPUTextureFormat

テクスチャビューのフォーマット。テクスチャ自体のformatか、 または作成時に指定したviewFormatsのいずれかでなければなりません。

dimension, GPUTextureViewDimension

テクスチャをどの次元でビューするか。

usage, GPUTextureUsageFlags(デフォルト値0

テクスチャビューの許可用途。テクスチャのusageフラグの部分集合でなければなりません。0の場合、テクスチャの全usageフラグをデフォルトとします。

注: ビューのformat がテクスチャの全usageに対応しない場合、デフォルトは失敗し、明示的にusage を指定する必要があります。

aspect, GPUTextureAspect(デフォルト値"all"

テクスチャビューからアクセス可能なaspect

baseMipLevel, GPUIntegerCoordinate(デフォルト値0

テクスチャビューからアクセス可能な最初(最詳細)のミップマップレベル。

mipLevelCount, GPUIntegerCoordinate

baseMipLevel から始まるミップマップレベル数。

baseArrayLayer, GPUIntegerCoordinate(デフォルト値0

テクスチャビューからアクセス可能な最初の配列レイヤーのインデックス。

arrayLayerCount, GPUIntegerCoordinate

baseArrayLayer から始まるアクセス可能な配列レイヤー数。

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};
"1d"

テクスチャを一次元画像としてビューします。

対応WGSL型:

  • texture_1d

  • texture_storage_1d

"2d"

テクスチャを単一の二次元画像としてビューします。

対応WGSL型:

  • texture_2d

  • texture_storage_2d

  • texture_multisampled_2d

  • texture_depth_2d

  • texture_depth_multisampled_2d

"2d-array"

テクスチャビューを二次元画像の配列としてビューします。

対応WGSL型:

  • texture_2d_array

  • texture_storage_2d_array

  • texture_depth_2d_array

"cube"

テクスチャをキューブマップとしてビューします。

ビューは6つの配列レイヤーを持ち、それぞれキューブの面([+X, -X, +Y, -Y, +Z, -Z])と以下の向きに対応します:

キューブマップ面。+U/+V軸は個々の面のテクスチャ座標、すなわち各面のテクセルコピーメモリレイアウトを示します。

注: 内側からビューした場合、+Xが右、+Yが上、+Zが前の左手座標系になります。

サンプリングはキューブマップの面をまたいでシームレスに行われます。

対応WGSL型:

  • texture_cube

  • texture_depth_cube

"cube-array"

テクスチャをn個のキューブマップのパック配列としてビューします。それぞれ6配列レイヤーで1つの"cube"ビューとして扱われ、合計で6n配列レイヤーとなります。

対応WGSL型:

  • texture_cube_array

  • texture_depth_cube_array

"3d"

テクスチャを三次元画像としてビューします。

対応WGSL型:

  • texture_3d

  • texture_storage_3d

GPUTextureAspect値はアスペクトの集合に対応します。 アスペクト集合は以下の各値ごとに定義されています。

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};
"all"

テクスチャフォーマットの利用可能な全アスペクトがテクスチャビューからアクセス可能になります。カラーフォーマットの場合colorアスペクトが、複合深度ステンシルフォーマットの場合はdepthとstencil両方が、単一アスペクトの深度・ステンシルフォーマットはそのアスペクトのみアクセス可能です。

アスペクト集合は[color, depth, stencil]です。

"stencil-only"

深度・ステンシルフォーマットのstencilアスペクトのみがテクスチャビューからアクセス可能です。

アスペクト集合は[stencil]です。

"depth-only"

深度・ステンシルフォーマットのdepthアスペクトのみがテクスチャビューからアクセス可能です。

アスペクト集合は[depth]です。

createView(descriptor)

GPUTextureViewを作成します。

注:
デフォルトではcreateView()は、 テクスチャ全体を表現できる次元でビューを作成します。例えば、createView()"2d"テクスチャ(レイヤー複数)に対して呼ぶと、 "2d-array" GPUTextureViewが作られます(たとえarrayLayerCountが1でも)。

レイヤー数が開発時に不明なソース由来テクスチャの場合は、createView() 呼び出し時に明示的なdimension を指定してシェーダ互換性を確保するのが推奨されます。

呼び出し元: GPUTexture this.

引数:

GPUTexture.createView(descriptor)メソッドの引数。
パラメータ Nullable Optional 説明
descriptor GPUTextureViewDescriptor 作成するGPUTextureViewの説明。

戻り値: view(型GPUTextureView

コンテンツタイムライン手順:

  1. ? テクスチャフォーマット必要機能の検証descriptor.formatthis.[[device]])。

  2. view! 新しいWebGPUオブジェクト生成this, GPUTextureView, descriptor)とする。

  3. thisデバイスタイムラインinitialization stepsを発行。

  4. viewを返す。

デバイスタイムライン initialization steps:
  1. descriptorGPUTextureViewDescriptor既定値の解決this, descriptor)の結果を設定。

  2. 以下の条件が満たされない場合検証エラー生成viewの無効化、return。

  3. viewを新しいGPUTextureViewオブジェクトとする。

  4. view.[[texture]]thisを設定。

  5. view.[[descriptor]]descriptorを設定。

  6. もしdescriptor.usageRENDER_ATTACHMENTが含まれる場合:

    1. renderExtentcompute render extent([this.width, this.height, this.depthOrArrayLayers], descriptor.baseMipLevel)を設定。

    2. view.[[renderExtent]]renderExtentを設定。

GPUTexture textureに対してGPUTextureViewDescriptor descriptorの既定値を解決する場合、以下のデバイスタイムライン手順を実行:
  1. resolveddescriptorのコピーとする。

  2. resolved.format指定されていない場合:

    1. formatGPUTextureAspectの解決format, descriptor.aspect)の結果とする。

    2. formatnullの場合:

      それ以外の場合:

      • resolved.formatformatを設定。

  3. resolved.mipLevelCountが指定されていない場合: resolved.mipLevelCountにtexture.mipLevelCount − resolved.baseMipLevelを設定。

  4. resolved.dimension指定されていない場合、かつ texture.dimension が:

    "1d"

    resolved.dimension"1d"を設定。

    "2d"

    array layer countが1の場合:

    resolved.dimensionに"2d"を設定。

    それ以外の場合:

    resolved.dimensionに"2d-array"を設定。

    "3d"

    resolved.dimension"3d"を設定。

  5. resolved.arrayLayerCount指定されていない場合、かつ resolved.dimension が:

    "1d", "2d", または "3d"

    resolved.arrayLayerCount1を設定。

    "cube"

    resolved.arrayLayerCount6を設定。

    "2d-array" または"cube-array"

    resolved.arrayLayerCounttexture配列レイヤー数resolved.baseArrayLayerを設定。

  6. resolved.usage0の場合: resolved.usagetexture.usageを設定。

  7. resolvedを返す。
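上記の既定値解決の流れは、概ね次のスケッチのように書き下せます(aspectによるformat解決は簡略化した、プレーンオブジェクト前提の参考実装です):

```javascript
// 仮定: texture は dimension/depthOrArrayLayers/mipLevelCount/format/usage を持つ。
function arrayLayerCount(texture) {
    // "1d"・"3d" は常に1、"2d" は depthOrArrayLayers がレイヤー数
    return texture.dimension === '2d' ? texture.depthOrArrayLayers : 1;
}

function resolveViewDefaults(texture, descriptor) {
    const resolved = { aspect: 'all', baseMipLevel: 0, baseArrayLayer: 0,
                       usage: 0, ...descriptor };
    // aspect による format 解決は簡略化し、texture.format をそのまま使用(仮定)
    if (resolved.format === undefined) resolved.format = texture.format;
    if (resolved.mipLevelCount === undefined)
        resolved.mipLevelCount = texture.mipLevelCount - resolved.baseMipLevel;
    if (resolved.dimension === undefined) {
        if (texture.dimension === '2d')
            resolved.dimension = arrayLayerCount(texture) === 1 ? '2d' : '2d-array';
        else resolved.dimension = texture.dimension;
    }
    if (resolved.arrayLayerCount === undefined) {
        if (resolved.dimension === 'cube') resolved.arrayLayerCount = 6;
        else if (resolved.dimension === '2d-array' || resolved.dimension === 'cube-array')
            resolved.arrayLayerCount = arrayLayerCount(texture) - resolved.baseArrayLayer;
        else resolved.arrayLayerCount = 1;
    }
    if (resolved.usage === 0) resolved.usage = texture.usage;
    return resolved;
}
```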

GPUTexture texture配列レイヤー数を決定する場合、以下の手順:
  1. texture.dimension が:

    "1d" または"3d"

    1を返す。

    "2d"

    texture.depthOrArrayLayersを返す。

6.3. テクスチャフォーマット

フォーマット名は、コンポーネントの順序、各コンポーネントのビット数、コンポーネントのデータ型を指定します。

フォーマットに-srgbサフィックスが付いている場合、シェーダ内で色値の読み書き時にsRGB変換(ガンマ⇔リニア)が適用されます。圧縮テクスチャフォーマットはfeaturesによって提供されます。命名規則は本規約に従い、テクスチャ名をプレフィックスとして使用します(例:etc2-rgba8unorm)。

テクセルブロックは、画素ベースのGPUTextureFormatテクスチャでは単一のアドレス可能な要素、 ブロックベース圧縮GPUTextureFormatテクスチャでは単一の圧縮ブロックです。

テクセルブロック幅およびテクセルブロック高さは、1つのテクセルブロックの寸法を指定します。

テクセルブロックコピーフットプリントは、あるGPUTextureFormatアスペクトについて、 テクセルコピー時に1つのテクセルブロックが占有するバイト数です(該当する場合)。

注: テクセルブロックメモリコストは、GPUTextureFormatの1つのテクセルブロックを格納するのに必要なバイト数です。全てのフォーマットで厳密には定義されていません。 この値は参考情報であり、規定値ではありません。
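例えば、テクセルブロックコピーフットプリントから、テクセルコピー1行あたりに必要なバイト数(パディング前)は次のように計算できます。テーブルの値は一部フォーマットのみを含む参考値です:

```javascript
// 仮定: 一部フォーマットのみを含む参考テーブル(画素ベースはブロック1x1、
// BC1 は 4x4 ブロックで8バイト)。
const blockInfo = {
    'rgba8unorm':     { blockWidth: 1, blockHeight: 1, bytesPerBlock: 4 },
    'rgba32float':    { blockWidth: 1, blockHeight: 1, bytesPerBlock: 16 },
    'bc1-rgba-unorm': { blockWidth: 4, blockHeight: 4, bytesPerBlock: 8 },
};

function bytesPerRow(format, widthInTexels) {
    const info = blockInfo[format];
    // 行はテクセルブロック単位に切り上げてコピーされる
    const blocksPerRow = Math.ceil(widthInTexels / info.blockWidth);
    return blocksPerRow * info.bytesPerBlock;
}
```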

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16unorm",
    "r16snorm",
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16unorm",
    "rg16snorm",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16unorm",
    "rgba16snorm",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};

"depth24plus" および"depth24plus-stencil8" フォーマットのdepth成分は、24ビットdepth値または"depth32float" 値として実装される場合があります。

"stencil8" フォーマットは、実際の"stencil8"として、または(depthアスペクトが非表示でアクセス不可の)"depth24stencil8"として実装される場合があります。

注:
depth32floatチャンネルの精度はすべての値において24ビットdepthチャンネルより高いですが、 表現可能な値集合は完全なスーパーセットではないことに注意してください。

フォーマットがレンダー可能であるとは、カラー・レンダー可能フォーマットまたは深度・ステンシルフォーマットの場合です。 § 26.1.1 プレーンカラーフォーマットRENDER_ATTACHMENT 機能を持つものはカラー・レンダー可能フォーマットです。他はカラー・レンダー可能フォーマットではありません。 深度・ステンシルフォーマットはすべてレンダー可能です。

レンダー可能フォーマットは、レンダーパイプラインのブレンディングで使用可能な場合ブレンド可能にもなります。 § 26.1 テクスチャフォーマットの機能参照。

フォーマットがフィルタ可能であるとは、 GPUTextureSampleType "float""unfilterable-float"のみでない)をサポートし、 "filtering" GPUSamplerで利用可能な場合です。 § 26.1 テクスチャフォーマットの機能参照。

GPUTextureAspectの解決(format, aspect)

引数:

戻り値: GPUTextureFormat またはnull

  1. aspectが:

    "all"

    formatを返す。

    "depth-only"
    "stencil-only"

    formatがdepth-stencil-formatの場合: formatアスペクト専用フォーマット§ 26.1.2 深度ステンシルフォーマット)または aspectが存在しなければnullを返す。

  2. nullを返す。
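この解決手順は、アスペクト専用フォーマット表の一部を使って次のように書き下せます(表は§ 26.1.2の抜粋であり、参考スケッチです):

```javascript
// 仮定: 深度ステンシルフォーマット表の一部のみを含む参考テーブル。
const aspectFormats = {
    'depth24plus-stencil8':  { 'depth-only': 'depth24plus',  'stencil-only': 'stencil8' },
    'depth32float-stencil8': { 'depth-only': 'depth32float', 'stencil-only': 'stencil8' },
    'depth16unorm':          { 'depth-only': 'depth16unorm' },
    'stencil8':              { 'stencil-only': 'stencil8' },
};

function resolveGPUTextureAspect(format, aspect) {
    if (aspect === 'all') return format;
    // 深度ステンシルフォーマットでない、または該当アスペクトがなければ null
    const entry = aspectFormats[format];
    return (entry && entry[aspect]) || null;
}
```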

一部のテクスチャフォーマットの使用にはGPUDeviceでfeatureを有効化する必要があります。 新フォーマットは仕様に追加される場合があるため、enum値が実装で未知な場合もあります。 実装間の挙動を揃えるため、featureが有効でない場合にフォーマットを使おうとすると例外が投げられます(未対応フォーマット時と同じ挙動)。

§ 26.1 テクスチャフォーマットの機能で、どのGPUTextureFormatがfeature必須か確認できます。

テクスチャフォーマット必要機能の検証(GPUTextureFormat format, 論理device device) のコンテンツタイムライン手順:
  1. formatがfeature必須で、device.[[features]] がfeatureを含まない場合:

    1. TypeErrorを投げる。

6.4. GPUExternalTexture

GPUExternalTexture は外部動画フレームをラップするサンプル可能な2Dテクスチャです。 不変のスナップショットであり、その内容はWebGPU内外(動画フレームの進行など)で変化しません。

GPUExternalTextureexternalTexture バインドグループレイアウトエントリメンバーでバインド可能です。 このメンバーは複数のバインディングスロットを使用します(詳細はそちら参照)。

注:
GPUExternalTexture はインポート元のコピーなしで実装できる場合もありますが、 実装依存です。 基盤表現の所有権は排他または他オーナー(動画デコーダ等)との共有の場合もあり、アプリケーションからは不可視です。

外部テクスチャの基盤表現は(正確なサンプリング挙動以外)観測不可ですが、一般的には次が含まれます:

実装内部の構成は時期・システム・UA・メディアソース・同一動画内フレーム間でも一貫しない場合があります。 多様な表現に対応するため、各外部テクスチャで以下を保守的にバインディングします:

[Exposed=(Window, Worker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;

GPUExternalTexture には以下の不変プロパティがあります:

[[descriptor]], 型 GPUExternalTextureDescriptor, 読み取り専用

このテクスチャ作成時のディスクリプタ。

GPUExternalTexture には以下の不変プロパティがあります:

[[expired]], 型boolean、初期値false

オブジェクトが期限切れ(利用不可)かどうか。

注: [[destroyed]]スロットと似ているが、こちらはtrueからfalseに戻る場合もある。

6.4.1. 外部テクスチャのインポート

外部テクスチャは外部動画オブジェクトからimportExternalTexture() を用いて作成します。

HTMLVideoElement から作成された外部テクスチャは、他のリソースのように手動やガベージコレクションではなく、インポート後にタスク内で自動的に期限切れ(破棄)となります。 外部テクスチャが期限切れになると、その[[expired]] スロットがtrueに変わります。

VideoFrame から作成された外部テクスチャは、元のVideoFrameclose(明示的にclose()呼び出し、または他の手段)された時のみ期限切れ(破棄)となります。

注:decode() でも述べられている通り、著者はデコーダの停止を防ぐため、出力VideoFrameclose()推奨します。 インポート後のVideoFrame がcloseされずに破棄された場合、インポート済みGPUExternalTexture オブジェクトが生きている限り、VideoFrameも生き続けます。 両方とも破棄されるまでVideoFrameはガベージコレクトされません。 ガベージコレクションは予測できないため、これでもビデオデコーダが停止する可能性があります。

GPUExternalTexture が期限切れになると、importExternalTexture() を再度呼び出す必要があります。 ただし、ユーザーエージェントは期限切れを解除し、同じGPUExternalTexture を返す場合があります(新しいものを生成しない)。これは、アプリケーションの実行が動画フレームレート(例:requestVideoFrameCallback()使用)と一致しない限り、一般的に起こります。 同じオブジェクトが再び返された場合、比較は等しくなり、以前のオブジェクトを参照しているGPUBindGroupGPURenderBundleなどは引き続き使用可能です。

dictionary GPUExternalTextureDescriptor
         : GPUObjectDescriptorBase {
    required (HTMLVideoElement or VideoFrame) source;
    PredefinedColorSpace colorSpace = "srgb";
};

GPUExternalTextureDescriptor 辞書には以下のメンバーがあります:

source, (HTMLVideoElement or VideoFrame)

外部テクスチャをインポートする動画ソース。ソースサイズは外部ソース寸法表に従って決定されます。

colorSpace, PredefinedColorSpace(デフォルト値"srgb"

source の画像内容を読み込み時に変換する色空間。

importExternalTexture(descriptor)

指定した画像ソースをラップしたGPUExternalTextureを作成します。

呼び出し元: GPUDevice this.

引数:

GPUDevice.importExternalTexture(descriptor) メソッドの引数。
パラメータ Nullable Optional 説明
descriptor GPUExternalTextureDescriptor 外部画像ソースオブジェクト(および作成オプション)を指定。

戻り値: GPUExternalTexture

コンテンツタイムライン手順:

  1. sourcedescriptor.sourceとする。

  2. 現在のsource画像内容が、同じdescriptorlabel除く)で以前に呼び出された importExternalTexture() と同じであり、UAが再利用を選択した場合:

    1. previousResultを以前返されたGPUExternalTextureとする。

    2. previousResult.[[expired]]falseにし、基盤リソースの所有権を更新する。

    3. resultpreviousResultとする。

    注: これにより、アプリケーションが重複インポートを検出し、依存オブジェクト(GPUBindGroupなど)を再生成せずに済みます。 実装は、1つのフレームが複数GPUExternalTextureでラップされるケースにも対応する必要があります(インポートメタデータcolorSpaceは同一フレームでも変更可能)。

    それ以外の場合:

    1. sourceorigin-cleanでない場合、 SecurityErrorを投げてreturn。

    2. usability? 画像引数の利用性の確認(source)とする。

    3. usabilitygoodでない場合:

      1. 検証エラー生成

      2. 無効化された GPUExternalTextureを返す。

    4. dataを、現在のsource画像内容をdescriptor.colorSpace へ非プリマルチアルファで変換した結果とする。

      この変換で[0, 1]範囲外の値になる場合があります。クランプが必要ならサンプリング後に行えます。

      注: コピーのように記述されていますが、実際は読み取り専用の基盤データと変換用メタデータへの参照として実装可能です。

    5. resultdataをラップした新しいGPUExternalTextureオブジェクトとする。

  3. sourceHTMLVideoElementの場合、 自動期限切れタスクをキュー(device this、次の手順):

    1. result.[[expired]]trueにし、基盤リソースの所有権を解放する。

    注: HTMLVideoElement はテクスチャをサンプリングする同じタスクでインポートすること(通常requestVideoFrameCallbackrequestAnimationFrame()を使う)。 そうしないと、アプリケーションが使い終わる前にこれらの手順でテクスチャが破棄される可能性があります。

  4. sourceVideoFrameの場合、 sourcecloseされた時、次の手順を実行:

    1. result.[[expired]]trueにする。

  5. result.labeldescriptor.labelを設定。

  6. resultを返す。

ページアニメーションフレームレートでvideo要素外部テクスチャを用いて描画する例:
const videoElement = document.createElement('video');
// ... videoElementのセットアップ、ready待ち ...

function frame() {
    requestAnimationFrame(frame);

    // 毎アニメーションフレームで必ず再インポート。importは期限切れの可能性が高い。
    // ブラウザは過去フレームをキャッシュ・再利用する場合があり、その際
    // 同じGPUExternalTextureオブジェクトを再び返すことがある。
    // この場合、古いバインドグループも有効。
    const externalTexture = gpuDevice.importExternalTexture({
        source: videoElement
    });

    // ... externalTextureで描画 ...
}
requestAnimationFrame(frame);
requestVideoFrameCallbackが使える場合、動画のフレームレートでvideo要素外部テクスチャを用いて描画する例:
const videoElement = document.createElement('video');
// ... videoElementのセットアップ ...

function frame() {
    videoElement.requestVideoFrameCallback(frame);

    // フレーム進行が確実なため、毎回再インポート
    const externalTexture = gpuDevice.importExternalTexture({
        source: videoElement
    });

    // ... externalTextureで描画 ...
}
videoElement.requestVideoFrameCallback(frame);

6.5. 外部テクスチャバインディングのサンプリング

externalTexture バインディングポイントは、GPUExternalTexture オブジェクト(動画など動的画像ソース)をバインドできます。また、GPUTextureGPUTextureViewにも対応しています。

注: GPUTextureGPUTextureViewexternalTextureバインディングにバインドした場合、 RGBA単一プレーン・クロップ/回転/色変換なしのGPUExternalTexture と同じ扱いになります。

外部テクスチャはWGSLではtexture_externalで表され、textureLoadtextureSampleBaseClampToEdgeで読み取れます。

textureSampleBaseClampToEdgeに渡すsamplerは基盤テクスチャのサンプリングに使われます。

バインディングリソース型GPUExternalTextureの場合、 結果はcolorSpaceで指定された色空間となります。 実装依存で、サンプラー(およびフィルタリング)が基底値から指定色空間への変換の前後どちらで適用されるかは異なります。

注: 内部表現がRGBAプレーンの場合、サンプリングは通常の2Dテクスチャと同様です。 複数プレーン(例:Y+UV)の場合、サンプラーは各基盤テクスチャを個別にサンプリングし、YUV→指定色空間変換前に適用されます。

7. サンプラー

7.1. GPUSampler

GPUSampler はシェーダでテクスチャリソースデータを解釈するための変換・フィルタ情報を符号化します。

GPUSamplercreateSampler() で作成されます。

[Exposed=(Window, Worker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

GPUSampler には以下の不変プロパティがあります:

[[descriptor]], 型 GPUSamplerDescriptor、読み取り専用

このGPUSampler の作成時に使用したGPUSamplerDescriptor

[[isComparison]], 型 boolean、読み取り専用

このGPUSampler が比較サンプラーとして利用されるかどうか。

[[isFiltering]], 型 boolean、読み取り専用

このGPUSampler がテクスチャの複数サンプルを重み付けするかどうか。

7.1.1. GPUSamplerDescriptor

GPUSamplerDescriptorGPUSampler の作成時オプションを指定します。

dictionary GPUSamplerDescriptor
         : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUMipmapFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};
addressModeU, GPUAddressMode(デフォルト値"clamp-to-edge"
addressModeV, GPUAddressMode(デフォルト値"clamp-to-edge"
addressModeW, GPUAddressMode(デフォルト値"clamp-to-edge"

テクスチャの幅・高さ・深度座標それぞれのアドレスモード指定。

magFilter, GPUFilterMode(デフォルト値"nearest"

サンプリング領域が1テクセル以下のときのサンプリング動作。

minFilter, GPUFilterMode(デフォルト値"nearest"

サンプリング領域が1テクセルより大きいときのサンプリング動作。

mipmapFilter, GPUMipmapFilterMode(デフォルト値"nearest"

ミップマップレベル間のサンプリング動作。

lodMinClamp, float(デフォルト値0
lodMaxClamp, float(デフォルト値32

テクスチャサンプリング時に内部的に使われる最小・最大レベルオブディテール

compare, GPUCompareFunction

指定した場合、サンプラーは指定GPUCompareFunctionで比較サンプラーになります。

注: 比較サンプラーはフィルタリング使用可能ですが、結果は実装依存・通常のフィルタリングルールと異なる場合があります。

maxAnisotropy, unsigned short(デフォルト値1

サンプラーで使われる最大異方性値クランプ指定。maxAnisotropy > 1かつ実装が対応している場合、異方性フィルタリング有効。

異方性フィルタリングは、斜めから見たテクスチャの画質を向上させます。maxAnisotropy の値が高いほど、フィルタリング時に許容される異方性比の上限が大きくなります。

注:
多くの実装はmaxAnisotropy を1〜16の範囲でサポートしています。指定値はプラットフォームがサポートする最大値にクランプされます。

フィルタリングの具体的な挙動は実装依存です。

レベルオブディテール(LOD)はテクスチャサンプリング時に選択されるミップレベルを示します。シェーダのtextureSampleLevel等で明示指定、またはテクスチャ座標の微分から暗黙決定されます。

注: 暗黙LOD計算例はScale Factor Operation, LOD Operation and Image Level SelectionVulkan 1.3仕様)参照。

GPUAddressMode は、サンプリング対象のテクセルがテクスチャ範囲外にある場合のサンプラーの動作を指定します。

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};
"clamp-to-edge"

テクスチャ座標は0.0〜1.0の範囲にクランプされます。

"repeat"

テクスチャ座標はテクスチャの反対側へラップします。

"mirror-repeat"

テクスチャ座標は反対側へラップしますが、座標の整数部が奇数のときはテクスチャが反転されます。
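各アドレスモードの概念的な動作を、正規化テクスチャ座標uに対する関数として書き下すと次のようになります(実際のテクセル選択はフィルタリングと組み合わせて行われるため、あくまで参考スケッチです):

```javascript
// 仮定: u は正規化テクスチャ座標(0.0〜1.0がテクスチャ範囲内)。
function applyAddressMode(mode, u) {
    switch (mode) {
        case 'clamp-to-edge':
            // 範囲外の座標を端にクランプ
            return Math.min(Math.max(u, 0), 1);
        case 'repeat':
            // 小数部のみ残して反対側へラップ
            return u - Math.floor(u);
        case 'mirror-repeat': {
            const period = Math.floor(u);
            const frac = u - period;
            // 整数部が奇数の周期では反転(ここでは u >= 0 を仮定)
            return period % 2 === 0 ? frac : 1 - frac;
        }
    }
}
```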

GPUFilterMode およびGPUMipmapFilterMode は、サンプリング領域が1テクセルちょうどではない場合のサンプラーの動作を指定します。

注: 各種フィルターモードでどのテクセルがサンプルされるかの例はTexel FilteringVulkan 1.3仕様)参照。

enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUMipmapFilterMode {
    "nearest",
    "linear",
};
"nearest"

テクスチャ座標に最も近いテクセルの値を返す。

"linear"

各次元で2テクセル選び、その値を線形補間して返す。

GPUCompareFunction は比較サンプラーの挙動を指定します。シェーダーで比較サンプラーを使用すると、depth_refと取得したテクセル値が比較され、その判定結果(合格なら1.0f、不合格なら0.0f)が生成されます。

比較後、テクスチャフィルタリングが有効ならフィルタリングが行われ、判定結果同士が混合されて[0, 1]範囲の値になります。フィルタリングは通常どおり動作すべきですが、精度が低下したり、混合が行われない可能性もあります。

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
"never"

比較判定は常に不合格。

"less"

与えられた値がサンプル値より小さい場合に合格。

"equal"

与えられた値がサンプル値と等しい場合に合格。

"less-equal"

与えられた値がサンプル値以下の場合に合格。

"greater"

与えられた値がサンプル値より大きい場合に合格。

"not-equal"

与えられた値がサンプル値と異なる場合に合格。

"greater-equal"

与えられた値がサンプル値以上の場合に合格。

"always"

比較判定は常に合格。
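各比較関数の判定は次のように表せます(合格で1.0、不合格で0.0を返す参考スケッチ):

```javascript
// 各 GPUCompareFunction を depth_ref とサンプル値の述語に対応させたテーブル。
const compareFns = {
    'never':         () => false,
    'less':          (ref, s) => ref < s,
    'equal':         (ref, s) => ref === s,
    'less-equal':    (ref, s) => ref <= s,
    'greater':       (ref, s) => ref > s,
    'not-equal':     (ref, s) => ref !== s,
    'greater-equal': (ref, s) => ref >= s,
    'always':        () => true,
};

function compareTest(fn, depthRef, sampled) {
    // 合格で1.0、不合格で0.0(フィルタリング前の単一判定)
    return compareFns[fn](depthRef, sampled) ? 1.0 : 0.0;
}
```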

7.1.2. サンプラーの作成

createSampler(descriptor)

GPUSamplerを作成します。

呼び出し元: GPUDevice this.

引数:

GPUDevice.createSampler(descriptor) メソッドの引数。
パラメータ Nullable Optional 説明
descriptor GPUSamplerDescriptor 作成するGPUSamplerの説明。

戻り値: GPUSampler

コンテンツタイムライン手順:

  1. s! 新しいWebGPUオブジェクトの生成(this, GPUSampler, descriptor)とする。

  2. thisデバイスタイムラインinitialization stepsを発行。

  3. sを返す。

デバイスタイムライン initialization steps:
  1. 以下の条件が満たされない場合検証エラー生成sの無効化、return。

  2. s.[[descriptor]]descriptorを設定。

  3. s.[[isComparison]] を、s.[[descriptor]]compare 属性がnullまたは未定義ならfalse、それ以外ならtrueに設定。

  4. s.[[isFiltering]] を、minFiltermagFiltermipmapFilter のいずれも "linear"でなければfalse、いずれかが"linear"ならtrueに設定。

テクスチャ座標繰り返し&三線形フィルタのGPUSampler生成例:
const sampler = gpuDevice.createSampler({
    addressModeU: 'repeat',
    addressModeV: 'repeat',
    magFilter: 'linear',
    minFilter: 'linear',
    mipmapFilter: 'linear',
});

8. リソースバインディング

8.1. GPUBindGroupLayout

GPUBindGroupLayout は、GPUBindGroup でバインドされる一連のリソースと、シェーダーステージからのそれらへのアクセス可能性との間のインターフェイスを定義します。

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

GPUBindGroupLayout には以下の不変プロパティがあります:

[[descriptor]], 型 GPUBindGroupLayoutDescriptor, 読み取り専用

8.1.1. バインドグループレイアウトの作成

GPUBindGroupLayout はGPUDevice.createBindGroupLayout()で作成されます。

dictionary GPUBindGroupLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

GPUBindGroupLayoutDescriptor 辞書には以下のメンバーがあります:

entries, 型 sequence<GPUBindGroupLayoutEntry>

バインドグループのシェーダーリソースバインディングを記述するエントリのリスト。

GPUBindGroupLayoutEntry は、GPUBindGroupLayout に含める単一のシェーダーリソースバインディングを記述します。

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;

    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};

GPUBindGroupLayoutEntry 辞書には以下のメンバーがあります:

binding, 型 GPUIndex32

GPUBindGroupLayout 内のリソースバインディングの一意な識別子。GPUBindGroupEntry.binding、およびGPUShaderModule内の@binding属性に対応します。

visibility, 型 GPUShaderStageFlags

GPUShaderStage のメンバーのビット集合。セットされた各ビットは、このGPUBindGroupLayoutEntryのリソースが対応するシェーダーステージからアクセス可能であることを示します。

buffer, 型 GPUBufferBindingLayout
sampler, 型 GPUSamplerBindingLayout
texture, 型 GPUTextureBindingLayout
storageTexture, 型 GPUStorageTextureBindingLayout
externalTexture, 型 GPUExternalTextureBindingLayout

これらのメンバーのうち、ちょうど1つを設定してバインディング型を示さなければなりません。設定したメンバーの内容で、その型固有のオプションを指定します。

createBindGroup() の対応するリソースには、このバインディングに対応するバインディングリソース型が必要です。

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX   = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE  = 0x4;
};

GPUShaderStage には以下のフラグが含まれ、このGPUBindGroupLayoutEntryに対応するGPUBindGroupEntryがどのシェーダーステージから可視になるかを表します:

VERTEX

バインドグループエントリは頂点シェーダーからアクセス可能になります。

FRAGMENT

バインドグループエントリはフラグメントシェーダーからアクセス可能になります。

COMPUTE

バインドグループエントリはコンピュートシェーダーからアクセス可能になります。

GPUBindGroupLayoutEntry のバインディング型は、buffer、sampler、texture、storageTexture、externalTextureのうち、どのメンバーが定義されているかによって決まります。1つのGPUBindGroupLayoutEntryに定義できるのはいずれか1つだけです。各メンバーには対応するGPUBindingResource型があり、各バインディング型には以下の表に示す内部用途が対応します:

buffer
  Resource type: GPUBufferBinding (or GPUBuffer as shorthand)
  Binding types (usage): "uniform" (constant), "storage" (storage), "read-only-storage" (storage-read)

sampler
  Resource type: GPUSampler
  Binding types (usage): "filtering", "non-filtering", "comparison" (all constant)

texture
  Resource type: GPUTextureView (or GPUTexture as shorthand)
  Binding types (usage): "float", "unfilterable-float", "depth", "sint", "uint" (all constant)

storageTexture
  Resource type: GPUTextureView (or GPUTexture as shorthand)
  Binding types (usage): "write-only" (storage), "read-write" (storage), "read-only" (storage-read)

externalTexture
  Resource type: GPUExternalTexture or GPUTextureView (or GPUTexture as shorthand)
  Binding usage: constant
A list of GPUBindGroupLayoutEntry values, entries, exceeds the binding slot limits of a supported limits object, limits, if the number of slots used toward any limit exceeds the supported value in limits. Each entry may use multiple slots toward multiple limits.

Device timeline steps:

  1. For each entry in entries, if:

    entry.buffer?.type is "uniform" and entry.buffer?.hasDynamicOffset is true

    Consider 1 maxDynamicUniformBuffersPerPipelineLayout slot to be used.

    entry.buffer?.type is "storage" and entry.buffer?.hasDynamicOffset is true

    Consider 1 maxDynamicStorageBuffersPerPipelineLayout slot to be used.

  2. For each shader stage stage in « VERTEX, FRAGMENT, COMPUTE »:

    1. For each entry in entries for which entry.visibility contains stage, if:

      entry.buffer?.type is "uniform"

      Consider 1 maxUniformBuffersPerShaderStage slot to be used.

      entry.buffer?.type is "storage" or "read-only-storage"

      Consider 1 maxStorageBuffersPerShaderStage slot to be used.

      entry.sampler is provided

      Consider 1 maxSamplersPerShaderStage slot to be used.

      entry.texture is provided

      Consider 1 maxSampledTexturesPerShaderStage slot to be used.

      entry.storageTexture is provided

      Consider 1 maxStorageTexturesPerShaderStage slot to be used.

      entry.externalTexture is provided

      Consider 4 maxSampledTexturesPerShaderStage slot, 1 maxSamplersPerShaderStage slot, and 1 maxUniformBuffersPerShaderStage slot to be used.

      Note: See GPUExternalTexture for an explanation of this behavior.
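
The slot accounting above can be sketched as a plain function over simplified entry objects. Limit names are those used in the steps; since each per-stage limit constrains each stage independently, this sketch reports the maximum usage over the three stages:

```javascript
// Count the binding slots a list of GPUBindGroupLayoutEntry-like objects
// uses toward each limit, following the steps above. Stage constants as in
// GPUShaderStage.
const VERTEX = 0x1, FRAGMENT = 0x2, COMPUTE = 0x4;

function countSlots(entries) {
    const used = {
        maxDynamicUniformBuffersPerPipelineLayout: 0,
        maxDynamicStorageBuffersPerPipelineLayout: 0,
        maxUniformBuffersPerShaderStage: 0,
        maxStorageBuffersPerShaderStage: 0,
        maxSamplersPerShaderStage: 0,
        maxSampledTexturesPerShaderStage: 0,
        maxStorageTexturesPerShaderStage: 0,
    };
    for (const e of entries) {
        if (e.buffer?.type === "uniform" && e.buffer?.hasDynamicOffset)
            used.maxDynamicUniformBuffersPerPipelineLayout += 1;
        if (e.buffer?.type === "storage" && e.buffer?.hasDynamicOffset)
            used.maxDynamicStorageBuffersPerPipelineLayout += 1;
    }
    for (const stage of [VERTEX, FRAGMENT, COMPUTE]) {
        const s = { uniform: 0, storage: 0, sampler: 0, sampled: 0, storageTex: 0 };
        for (const e of entries) {
            if (!(e.visibility & stage)) continue;
            if (e.buffer?.type === "uniform") s.uniform += 1;
            if (e.buffer?.type === "storage" || e.buffer?.type === "read-only-storage") s.storage += 1;
            if (e.sampler) s.sampler += 1;
            if (e.texture) s.sampled += 1;
            if (e.storageTexture) s.storageTex += 1;
            // externalTexture uses multiple slots; see GPUExternalTexture.
            if (e.externalTexture) { s.sampled += 4; s.sampler += 1; s.uniform += 1; }
        }
        used.maxUniformBuffersPerShaderStage = Math.max(used.maxUniformBuffersPerShaderStage, s.uniform);
        used.maxStorageBuffersPerShaderStage = Math.max(used.maxStorageBuffersPerShaderStage, s.storage);
        used.maxSamplersPerShaderStage = Math.max(used.maxSamplersPerShaderStage, s.sampler);
        used.maxSampledTexturesPerShaderStage = Math.max(used.maxSampledTexturesPerShaderStage, s.sampled);
        used.maxStorageTexturesPerShaderStage = Math.max(used.maxStorageTexturesPerShaderStage, s.storageTex);
    }
    return used;
}
```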

enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};

GPUBufferBindingLayout dictionaries have the following members:

type, of type GPUBufferBindingType, defaulting to "uniform"

Indicates the type required for buffers bound to this binding.

hasDynamicOffset, of type boolean, defaulting to false

Indicates whether this binding requires a dynamic offset.

minBindingSize, of type GPUSize64, defaulting to 0

Indicates the minimum size of a buffer binding used with this bind point.

Bindings are always validated against this size in createBindGroup().

If this is not 0, pipeline creation additionally validates that this value ≥ the minimum buffer binding size of the variable.

If this is 0, it is ignored by pipeline creation, and instead draw/dispatch commands validate that each binding in the GPUBindGroup satisfies the minimum buffer binding size of the variable.

Note: Similar execution-time validation is theoretically possible for other binding-related fields specified for early validation, like sampleType and format, which currently can only be validated in pipeline creation. However, such execution-time validation could be costly or unnecessarily complex, so it is available only for minBindingSize which is expected to have the most ergonomic impact.

enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};

GPUSamplerBindingLayout dictionaries have the following members:

type, of type GPUSamplerBindingType, defaulting to "filtering"

Indicates the required type of a sampler bound to this binding.

enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};

GPUTextureBindingLayout dictionaries have the following members:

sampleType, of type GPUTextureSampleType, defaulting to "float"

Indicates the type required for texture views bound to this binding.

viewDimension, of type GPUTextureViewDimension, defaulting to "2d"

Indicates the required dimension for texture views bound to this binding.

multisampled, of type boolean, defaulting to false

Indicates whether or not texture views bound to this binding must be multisampled.

enum GPUStorageTextureAccess {
    "write-only",
    "read-only",
    "read-write",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};

GPUStorageTextureBindingLayout dictionaries have the following members:

access, of type GPUStorageTextureAccess, defaulting to "write-only"

The access mode for this binding, indicating readability and writability.

format, of type GPUTextureFormat

The required format of texture views bound to this binding.

viewDimension, of type GPUTextureViewDimension, defaulting to "2d"

Indicates the required dimension for texture views bound to this binding.

dictionary GPUExternalTextureBindingLayout {
};

A GPUBindGroupLayout object has the following device timeline properties:

[[entryMap]], of type ordered map<GPUSize32, GPUBindGroupLayoutEntry>, readonly

The map of binding indices pointing to the GPUBindGroupLayoutEntrys, which this GPUBindGroupLayout describes.

[[dynamicOffsetCount]], of type GPUSize32, readonly

The number of buffer bindings with dynamic offsets in this GPUBindGroupLayout.

[[exclusivePipeline]], of type GPUPipelineBase?, readonly

The pipeline that created this GPUBindGroupLayout, if it was created as part of a default pipeline layout. If not null, GPUBindGroups created with this GPUBindGroupLayout can only be used with the specified GPUPipelineBase.

createBindGroupLayout(descriptor)

Creates a GPUBindGroupLayout.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createBindGroupLayout(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUBindGroupLayoutDescriptor Description of the GPUBindGroupLayout to create.

Returns: GPUBindGroupLayout

Content timeline steps:

  1. For each GPUBindGroupLayoutEntry entry in descriptor.entries:

    1. If entry.storageTexture is provided:

      1. ? Validate texture format required features for entry.storageTexture.format with this.[[device]].

  2. Let layout be ! create a new WebGPU object(this, GPUBindGroupLayout, descriptor).

  3. Issue the initialization steps on the Device timeline of this.

  4. Return layout.

Device timeline initialization steps:
  1. If any of the following conditions are unsatisfied, generate a validation error, invalidate layout, and return.

  2. Set layout.[[descriptor]] to descriptor.

  3. Set layout.[[dynamicOffsetCount]] to the number of entries in descriptor where buffer is provided and buffer.hasDynamicOffset is true.

  4. Set layout.[[exclusivePipeline]] to null.

  5. For each GPUBindGroupLayoutEntry entry in descriptor.entries:

    1. Insert entry into layout.[[entryMap]] with the key of entry.binding.
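
The bookkeeping in steps 3-5 amounts to a count and a map build. A plain-object sketch, with internal slots modeled as ordinary properties:

```javascript
// Sketch of the initialization steps above: count dynamic-offset buffer
// bindings ([[dynamicOffsetCount]]) and build the binding-index-to-entry
// map ([[entryMap]]).
function initBindGroupLayoutState(descriptor) {
    const entryMap = new Map();
    let dynamicOffsetCount = 0;
    for (const entry of descriptor.entries) {
        if (entry.buffer !== undefined && entry.buffer.hasDynamicOffset)
            dynamicOffsetCount += 1;
        entryMap.set(entry.binding, entry);
    }
    // [[exclusivePipeline]] is null for layouts created by createBindGroupLayout().
    return { entryMap, dynamicOffsetCount, exclusivePipeline: null };
}
```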

8.1.2. Compatibility

Two GPUBindGroupLayout objects a and b are considered group-equivalent if and only if all of the following conditions are satisfied:

If bind group layouts are group-equivalent, they can be used interchangeably in all contexts.

8.2. GPUBindGroup

A GPUBindGroup defines a set of resources to be bound together in a group and how the resources are used in shader stages.

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

GPUBindGroup has the following device timeline properties:

[[layout]], of type GPUBindGroupLayout, readonly

The GPUBindGroupLayout associated with this GPUBindGroup.

[[entries]], of type sequence<GPUBindGroupEntry>, readonly

The set of GPUBindGroupEntrys this GPUBindGroup describes.

[[usedResources]], of type usage scope, readonly

The set of buffer and texture subresources used by this bind group, associated with lists of the internal usage flags.

The bound buffer ranges of a GPUBindGroup bindGroup, given list<GPUBufferDynamicOffset> dynamicOffsets, are computed as follows:
  1. Let result be a new set<(GPUBindGroupLayoutEntry, GPUBufferBinding)>.

  2. Let dynamicOffsetIndex be 0.

  3. For each GPUBindGroupEntry bindGroupEntry in bindGroup.[[entries]], sorted by bindGroupEntry.binding:

    1. Let bindGroupLayoutEntry be bindGroup.[[layout]].[[entryMap]][bindGroupEntry.binding].

    2. If bindGroupLayoutEntry.buffer is not provided, continue.

    3. Let bound be get as buffer binding(bindGroupEntry.resource).

    4. If bindGroupLayoutEntry.buffer.hasDynamicOffset:

      1. Increment bound.offset by dynamicOffsets[dynamicOffsetIndex].

      2. Increment dynamicOffsetIndex by 1.

    5. Append (bindGroupLayoutEntry, bound) to result.

  4. Return result.
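
The algorithm above can be sketched in plain JavaScript, with bind group internals ([[entries]], [[layout]], [[entryMap]]) modeled as ordinary properties rather than real GPU objects:

```javascript
// Compute the bound buffer ranges of a bind-group-like object, applying
// dynamic offsets in order of ascending binding number, per the steps above.
function boundBufferRanges(bindGroup, dynamicOffsets) {
    const result = [];
    let dynamicOffsetIndex = 0;
    const entries = [...bindGroup.entries].sort((a, b) => a.binding - b.binding);
    for (const bindGroupEntry of entries) {
        const layoutEntry = bindGroup.layout.entryMap.get(bindGroupEntry.binding);
        if (layoutEntry.buffer === undefined) continue;
        // "get as buffer binding": copy so the increment doesn't mutate the entry.
        const bound = { ...bindGroupEntry.resource };
        if (layoutEntry.buffer.hasDynamicOffset) {
            bound.offset = (bound.offset ?? 0) + dynamicOffsets[dynamicOffsetIndex];
            dynamicOffsetIndex += 1;
        }
        result.push([layoutEntry, bound]);
    }
    return result;
}
```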

8.2.1. Bind Group Creation

A GPUBindGroup is created via GPUDevice.createBindGroup().

dictionary GPUBindGroupDescriptor
         : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

GPUBindGroupDescriptor dictionaries have the following members:

layout, of type GPUBindGroupLayout

The GPUBindGroupLayout the entries of this bind group will conform to.

entries, of type sequence<GPUBindGroupEntry>

A list of entries describing the resources to expose to the shader for each binding described by the layout.

typedef (GPUSampler or
         GPUTexture or
         GPUTextureView or
         GPUBuffer or
         GPUBufferBinding or
         GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

A GPUBindGroupEntry describes a single resource to be bound in a GPUBindGroup, and has the following members:

binding, of type GPUIndex32

A unique identifier for a resource binding within the GPUBindGroup, corresponding to a GPUBindGroupLayoutEntry.binding and a @binding attribute in the GPUShaderModule.

resource, of type GPUBindingResource

The resource to bind, which may be a GPUSampler, GPUTexture, GPUTextureView, GPUBuffer, GPUBufferBinding, or GPUExternalTexture.

GPUBindGroupEntry has the following device timeline properties:

[[prevalidatedSize]], of type boolean

Whether or not this binding entry had its buffer size validated at time of creation.

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

A GPUBufferBinding describes a buffer and optional range to bind as a resource, and has the following members:

buffer, of type GPUBuffer

The GPUBuffer to bind.

offset, of type GPUSize64, defaulting to 0

The offset, in bytes, from the beginning of buffer to the beginning of the range exposed to the shader by the buffer binding.

size, of type GPUSize64

The size, in bytes, of the buffer binding. If not provided, specifies the range starting at offset and ending at the end of buffer.

createBindGroup(descriptor)

Creates a GPUBindGroup.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createBindGroup(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUBindGroupDescriptor Description of the GPUBindGroup to create.

Returns: GPUBindGroup

Content timeline steps:

  1. Let bindGroup be ! create a new WebGPU object(this, GPUBindGroup, descriptor).

  2. Issue the initialization steps on the Device timeline of this.

  3. Return bindGroup.

Device timeline initialization steps:
  1. Let limits be this.[[device]].[[limits]].

  2. If any of the following conditions are unsatisfied, generate a validation error, invalidate bindGroup, and return.

    For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:

  3. Let bindGroup.[[layout]] = descriptor.layout.

  4. Let bindGroup.[[entries]] = descriptor.entries.

  5. Let bindGroup.[[usedResources]] = {}.

  6. For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:

    1. Let internalUsage be the binding usage for layoutBinding.

    2. Each subresource seen by resource is added to [[usedResources]] as internalUsage.

    3. Let bindingDescriptor.[[prevalidatedSize]] be false if the defined binding member for layoutBinding is buffer and layoutBinding.buffer.minBindingSize is 0, and true otherwise.

get as texture view(resource)

Arguments:

Returns: GPUTextureView

  1. Assert resource is either a GPUTexture or a GPUTextureView.

  2. If resource is a:

    GPUTexture
    1. Return resource.createView().

    GPUTextureView
    1. Return resource.

get as buffer binding(resource)

Arguments:

Returns: GPUBufferBinding

  1. Assert resource is either a GPUBuffer or a GPUBufferBinding.

  2. If resource is a:

    GPUBuffer
    1. Let bufferBinding be a new GPUBufferBinding.

    2. Set bufferBinding.buffer to resource.

    3. Return bufferBinding.

    GPUBufferBinding
    1. Return resource.

effective buffer binding size(binding)

Arguments:

Returns: GPUSize64

  1. If binding.size is not provided:

    1. Return max(0, binding.buffer.size - binding.offset).

  2. Return binding.size.

Two GPUBufferBinding objects a and b are considered buffer-binding-aliasing if and only if all of the following are true:

Note: When doing this calculation, any dynamic offsets have already been applied to the ranges.
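
A sketch of the effective size computation, together with a hedged version of the aliasing check (the spec's exact condition list is elided above; its core is two bindings of the same buffer whose effective ranges intersect):

```javascript
// "effective buffer binding size": when size is not provided, the bound
// range ends at the end of the buffer.
function effectiveBufferBindingSize(binding) {
    if (binding.size === undefined)
        return Math.max(0, binding.buffer.size - binding.offset);
    return binding.size;
}

// Hedged sketch of buffer-binding-aliasing: same buffer, and the half-open
// ranges [offset, offset + effective size) intersect. Dynamic offsets are
// assumed to have already been applied, as the note above says.
function bufferBindingsAlias(a, b) {
    if (a.buffer !== b.buffer) return false;
    const aEnd = a.offset + effectiveBufferBindingSize(a);
    const bEnd = b.offset + effectiveBufferBindingSize(b);
    return a.offset < bEnd && b.offset < aEnd;
}
```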

8.3. GPUPipelineLayout

A GPUPipelineLayout defines the mapping between resources of all GPUBindGroup objects set up during command encoding in setBindGroup(), and the shaders of the pipeline set by GPURenderCommandsMixin.setPipeline or GPUComputePassEncoder.setPipeline.

The full binding address of a resource can be defined as a trio of:

  1. shader stage mask, to which the resource is visible

  2. bind group index

  3. binding number

The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup (with the corresponding GPUBindGroupLayout) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.

[Exposed=(Window, Worker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

GPUPipelineLayout has the following device timeline properties:

[[bindGroupLayouts]], of type list<GPUBindGroupLayout>, readonly

The GPUBindGroupLayout objects provided at creation in GPUPipelineLayoutDescriptor.bindGroupLayouts.

Note: using the same GPUPipelineLayout for many GPURenderPipeline or GPUComputePipeline pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.

GPUComputePipeline object X was created with GPUPipelineLayout.bindGroupLayouts A, B, C. GPUComputePipeline object Y was created with GPUPipelineLayout.bindGroupLayouts A, D, C. Supposing the command encoding sequence has two dispatches:
  1. setBindGroup(0, ...)

  2. setBindGroup(1, ...)

  3. setBindGroup(2, ...)

  4. setPipeline(X)

  5. dispatchWorkgroups()

  6. setBindGroup(1, ...)

  7. setPipeline(Y)

  8. dispatchWorkgroups()

In this scenario, the user agent would have to re-bind group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout at index 2 of GPUPipelineLayout.bindGroupLayouts nor the GPUBindGroup at slot 2 changes.

Note: the expected usage of the GPUPipelineLayout is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.

8.3.1. Pipeline Layout Creation

A GPUPipelineLayout is created via GPUDevice.createPipelineLayout().

dictionary GPUPipelineLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout?> bindGroupLayouts;
};

GPUPipelineLayoutDescriptor dictionaries define all the GPUBindGroupLayouts used by a pipeline, and have the following members:

bindGroupLayouts, of type sequence<GPUBindGroupLayout?>

A list of optional GPUBindGroupLayouts the pipeline will use. Each element corresponds to a @group attribute in the GPUShaderModule, with the Nth element corresponding with @group(N).

createPipelineLayout(descriptor)

Creates a GPUPipelineLayout.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createPipelineLayout(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUPipelineLayoutDescriptor Description of the GPUPipelineLayout to create.

Returns: GPUPipelineLayout

Content timeline steps:

  1. Let pl be ! create a new WebGPU object(this, GPUPipelineLayout, descriptor).

  2. Issue the initialization steps on the Device timeline of this.

  3. Return pl.

Device timeline initialization steps:
  1. Let limits be this.[[device]].[[limits]].

  2. Let bindGroupLayouts be a list of null GPUBindGroupLayouts with size equal to limits.maxBindGroups.

  3. For each bindGroupLayout at index i in descriptor.bindGroupLayouts:

    1. If bindGroupLayout is not null and bindGroupLayout.[[descriptor]].entries is not empty:

      1. Set bindGroupLayouts[i] to bindGroupLayout.

  4. Let allEntries be the result of concatenating bgl.[[descriptor]].entries for all non-null bgl in bindGroupLayouts.

  5. If any of the following conditions are unsatisfied, generate a validation error, invalidate pl, and return.

  6. Set the pl.[[bindGroupLayouts]] to bindGroupLayouts.

Note: two GPUPipelineLayout objects are considered equivalent for any usage if their internal [[bindGroupLayouts]] sequences contain GPUBindGroupLayout objects that are group-equivalent.

8.4. Example

Create a GPUBindGroupLayout that describes a binding with a uniform buffer, a texture, and a sampler. Then create a GPUBindGroup and a GPUPipelineLayout using the GPUBindGroupLayout.
const bindGroupLayout = gpuDevice.createBindGroupLayout({
    entries: [{
        binding: 0,
        visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
        buffer: {}
    }, {
        binding: 1,
        visibility: GPUShaderStage.FRAGMENT,
        texture: {}
    }, {
        binding: 2,
        visibility: GPUShaderStage.FRAGMENT,
        sampler: {}
    }]
});

const bindGroup = gpuDevice.createBindGroup({
    layout: bindGroupLayout,
    entries: [{
        binding: 0,
        resource: { buffer: buffer },
    }, {
        binding: 1,
        resource: texture
    }, {
        binding: 2,
        resource: sampler
    }]
});

const pipelineLayout = gpuDevice.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout]
});

9. Shader Modules

9.1. GPUShaderModule

[Exposed=(Window, Worker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> getCompilationInfo();
};
GPUShaderModule includes GPUObjectBase;

GPUShaderModule is a reference to an internal shader module object.

9.1.1. Shader Module Creation

dictionary GPUShaderModuleDescriptor
         : GPUObjectDescriptorBase {
    required USVString code;
    sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};
GPUShaderModuleDescriptor dictionaries have the following members:

code, of type USVString

The WGSL source code for the shader module.

compilationHints, of type sequence<GPUShaderModuleCompilationHint>, defaulting to []

A list of GPUShaderModuleCompilationHints.

Any hint provided by an application should contain information about one entry point of a pipeline that will eventually be created from the entry point.

Implementations should use any information present in the GPUShaderModuleCompilationHint to perform as much compilation as is possible within createShaderModule().

Aside from type-checking, these hints are not validated in any way.

NOTE:
Supplying information in compilationHints does not have any observable effect, other than performance. It may be detrimental to performance to provide hints for pipelines that never end up being created.

Because a single shader module can hold multiple entry points, and multiple pipelines can be created from a single shader module, it can be more performant for an implementation to do as much compilation as possible once in createShaderModule() rather than multiple times in the multiple calls to createComputePipeline() or createRenderPipeline().

Hints are only applied to the entry points they explicitly name. Unlike GPUProgrammableStage.entryPoint, there is no default, even if only one entry point is present in the module.

Note: Hints are not validated in an observable way, but user agents may surface identifiable errors (like unknown entry point names or incompatible pipeline layouts) to developers, for example in the browser developer console.

createShaderModule(descriptor)

Creates a GPUShaderModule.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createShaderModule(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUShaderModuleDescriptor Description of the GPUShaderModule to create.

Returns: GPUShaderModule

Content timeline steps:

  1. Let sm be ! create a new WebGPU object(this, GPUShaderModule, descriptor).

  2. Issue the initialization steps on the Device timeline of this.

  3. Return sm.

Device timeline initialization steps:
  1. Let error be any error that results from shader module creation with the WGSL source descriptor.code, or null if no errors occurred.

  2. If any of the following requirements are unmet, generate a validation error, invalidate sm, and return.

    Note: Uncategorized errors cannot arise from shader module creation. Implementations which detect such errors during shader module creation must behave as if the shader module is valid, and defer surfacing the error until pipeline creation.

NOTE:
User agents should not include detailed compiler error messages or shader text in the message text of validation errors arising here: these details are accessible via getCompilationInfo(). User agents should surface human-readable, formatted error details to developers for easier debugging (for example as a warning in the browser developer console, expandable to show full shader source).

As shader compilation errors should be rare in production applications, user agents could choose to surface them to developers regardless of error handling (GPU error scopes or uncapturederror event handlers), e.g. as an expandable warning. If not, they should provide and document another way for developers to access human-readable error details, for example by adding a checkbox to show errors unconditionally, or by showing human-readable details when logging a GPUCompilationInfo object to the console.

Create a GPUShaderModule from WGSL code:
// A simple vertex and fragment shader pair that will fill the viewport with red.
const shaderSource = `
    var<private> pos : array<vec2<f32>, 3> = array<vec2<f32>, 3>(
        vec2(-1.0, -1.0), vec2(-1.0, 3.0), vec2(3.0, -1.0));

    @vertex
    fn vertexMain(@builtin(vertex_index) vertexIndex : u32) -> @builtin(position) vec4<f32> {
        return vec4(pos[vertexIndex], 1.0, 1.0);
    }

    @fragment
    fn fragmentMain() -> @location(0) vec4<f32> {
        return vec4(1.0, 0.0, 0.0, 1.0);
    }
`;

const shaderModule = gpuDevice.createShaderModule({
    code: shaderSource,
});
9.1.1.1. Shader Module Compilation Hints

Shader module compilation hints are optional, additional information indicating how a given GPUShaderModule entry point is intended to be used in the future. For some implementations this information may aid in compiling the shader module earlier, potentially increasing performance.

dictionary GPUShaderModuleCompilationHint {
    required USVString entryPoint;
    (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout, of type (GPUPipelineLayout or GPUAutoLayoutMode)

A GPUPipelineLayout that the GPUShaderModule may be used with in a future createComputePipeline() or createRenderPipeline() call. If set to "auto", the default pipeline layout for the entry point associated with this hint will be used.

NOTE:
If possible, authors should supply the same information to createShaderModule() and createComputePipeline() / createRenderPipeline().

If an application is unable to provide hint information at the time of calling createShaderModule(), it should usually not delay calling createShaderModule(), but instead just omit the unknown information from the compilationHints sequence or the individual members of GPUShaderModuleCompilationHint. Omitting this information may cause compilation to be deferred to createComputePipeline() / createRenderPipeline().

If an author is not confident that the hint information passed to createShaderModule() will match the information later passed to createComputePipeline() / createRenderPipeline() with that same module, they should avoid passing that information to createShaderModule(), as passing mismatched information to createShaderModule() may cause unnecessary compilations to occur.

9.1.2. Shader Module Compilation Information

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};

A GPUCompilationMessage is an informational, warning, or error message generated by the GPUShaderModule compiler. The messages are intended to be human readable to help developers diagnose issues with their shader code. Each message may correspond to either a single point in the shader code, a substring of the shader code, or may not correspond to any specific point in the code at all.

GPUCompilationMessage has the following attributes:

message, of type DOMString, readonly

The human-readable, localizable text for this compilation message.

Note: The message should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.

Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.

type, of type GPUCompilationMessageType, readonly

The severity level of the message.

If the type is "error", it corresponds to a shader-creation error.

lineNum, of type unsigned long long, readonly

The line number in the shader code the message corresponds to. Value is one-based, such that a lineNum of 1 indicates the first line of the shader code. Lines are delimited by line breaks.

If the message corresponds to a substring this points to the line on which the substring begins. Must be 0 if the message does not correspond to any specific point in the shader code.

linePos, of type unsigned long long, readonly

The offset, in UTF-16 code units, from the beginning of line lineNum of the shader code to the point or beginning of the substring that the message corresponds to. Value is one-based, such that a linePos of 1 indicates the first code unit of the line.

If message corresponds to a substring this points to the first UTF-16 code unit of the substring. Must be 0 if the message does not correspond to any specific point in the shader code.

offset, of type unsigned long long, readonly

The offset from the beginning of the shader code in UTF-16 code units to the point or beginning of the substring that message corresponds to. Must reference the same position as lineNum and linePos. Must be 0 if the message does not correspond to any specific point in the shader code.

length, of type unsigned long long, readonly

The number of UTF-16 code units in the substring that message corresponds to. If the message does not correspond with a substring then length must be 0.

Note: GPUCompilationMessage.lineNum and GPUCompilationMessage.linePos are one-based since the most common use for them is expected to be printing human readable messages that can be correlated with the line and column numbers shown in many text editors.

Note: GPUCompilationMessage.offset and GPUCompilationMessage.length are appropriate to pass to substr() in order to retrieve the substring of the shader code the message corresponds to.
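
For example, a helper that recovers the source excerpt a message points at, using the offset/length semantics above (the message values in the test are hypothetical, not produced by a real compiler):

```javascript
// Recover the excerpt of shader source that a compilation-message-like
// object points at. offset and length are in UTF-16 code units; a length
// of 0 means the message marks a point or has no location.
function messageExcerpt(shaderCode, message) {
    if (message.length === 0) return "";
    return shaderCode.substr(message.offset, message.length);
}
```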

getCompilationInfo()

Returns any messages generated during the GPUShaderModule’s compilation.

The locations, order, and contents of messages are implementation-defined. In particular, messages may not be ordered by lineNum.

Called on: GPUShaderModule this

Returns: Promise<GPUCompilationInfo>

Content timeline steps:

  1. Let contentTimeline be the current Content timeline.

  2. Let promise be a new promise.

  3. Issue the synchronization steps on the Device timeline of this.

  4. Return promise.

Device timeline synchronization steps:
  1. Let event occur upon the (successful or unsuccessful) completion of shader module creation for this.

  2. Listen for timeline event event on this.[[device]], handled by the subsequent steps on contentTimeline.

Content timeline steps:
  1. Let info be a new GPUCompilationInfo.

  2. Let messages be a list of any errors, warnings, or informational messages generated during shader module creation for this, or the empty list [] if the device was lost.

  3. For each message in messages:

    1. Let m be a new GPUCompilationMessage.

    2. Set m.message to be the text of message.

    3. If message is a shader-creation error:

      Set m.type to "error"

      If message is a warning:

      Set m.type to "warning"

      Otherwise:

      Set m.type to "info"

    4. If message is associated with a specific substring or position within the shader code:
      1. Set m.lineNum to the one-based number of the first line that the message refers to.

      2. Set m.linePos to the one-based number of the first UTF-16 code unit on line m.lineNum that the message refers to, or 1 if the message refers to the entire line.

      3. Set m.offset to the number of UTF-16 code units from the beginning of the shader to the beginning of the substring or position that message refers to.

      4. Set m.length to the length of the substring in UTF-16 code units that message refers to, or 0 if message refers to a position.

      Otherwise:
      1. Set m.lineNum to 0.

      2. Set m.linePos to 0.

      3. Set m.offset to 0.

      4. Set m.length to 0.

    5. Append m to info.messages.

  4. Resolve promise with info.
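
A hedged sketch of consuming the resulting info: format message-like objects as editor-style line:column diagnostics, following the one-based lineNum/linePos semantics above (the sample objects in the test are hypothetical):

```javascript
// Format compilation-message-like objects as editor-style diagnostics.
// lineNum/linePos are one-based; 0 means the message has no specific
// location in the shader code, per the attribute definitions above.
function formatMessages(messages) {
    return messages.map(m =>
        m.lineNum === 0
            ? `${m.type}: ${m.message}`
            : `${m.type} at ${m.lineNum}:${m.linePos}: ${m.message}`);
}
```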

10. Pipelines

A pipeline, be it a GPUComputePipeline or a GPURenderPipeline, represents the complete function performed by a combination of the GPU hardware, the driver, and the user agent, which processes the input data in the shape of bindings and vertex buffers and produces some output, like the colors in the output render targets.

Structurally, the pipeline consists of a sequence of programmable stages (shaders) and fixed-function states, such as the blending modes.

Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.

This combined state is created as a single object (a GPUComputePipeline or GPURenderPipeline) and switched using one command (GPUComputePassEncoder.setPipeline() or GPURenderCommandsMixin.setPipeline() respectively).

There are two ways to create pipelines:

immediate pipeline creation

createComputePipeline() and createRenderPipeline() return a pipeline object which can be used immediately in a pass encoder.

When this fails, the pipeline object will be invalid and the call will generate either a validation error or an internal error.

Note: A handle object is returned immediately, but actual pipeline creation is not synchronous. If pipeline creation takes a long time, this can incur a stall in the device timeline at some point between the creation call and execution of the submit() in which it is first used. The point is unspecified, but most likely to be one of: at creation, at the first usage of the pipeline in setPipeline(), at the corresponding finish() of that GPUCommandEncoder or GPURenderBundleEncoder, or at submit() of that GPUCommandBuffer.

async pipeline creation

createComputePipelineAsync() and createRenderPipelineAsync() return a Promise which resolves to a pipeline object when creation of the pipeline has completed.

When this fails, the Promise rejects with a GPUPipelineError.

GPUPipelineError describes a pipeline creation failure.

[Exposed=(Window, Worker), SecureContext, Serializable]
interface GPUPipelineError : DOMException {
    constructor(optional DOMString message = "", GPUPipelineErrorInit options);
    readonly attribute GPUPipelineErrorReason reason;
};

dictionary GPUPipelineErrorInit {
    required GPUPipelineErrorReason reason;
};

enum GPUPipelineErrorReason {
    "validation",
    "internal",
};

GPUPipelineError constructor:

constructor()
Arguments:
Arguments for the GPUPipelineError.constructor() method.
Parameter Type Nullable Optional Description
message DOMString Error message of the base DOMException.
options GPUPipelineErrorInit Options specific to GPUPipelineError.

Content timeline steps:

  1. Set this.name to "GPUPipelineError".

  2. Set this.message to message.

  3. Set this.reason to options.reason.

GPUPipelineError has the following attributes:

reason, of type GPUPipelineErrorReason, readonly

A read-only slot-backed attribute exposing the type of error encountered in pipeline creation as a GPUPipelineErrorReason:

GPUPipelineError objects are serializable objects.

Their serialization steps, given value and serialized, are:
  1. Run the DOMException serialization steps given value and serialized.

Their deserialization steps, given value and serialized, are:
  1. Run the DOMException deserialization steps given value and serialized.

10.1. Base pipelines

enum GPUAutoLayoutMode {
    "auto",
};

dictionary GPUPipelineDescriptorBase
         : GPUObjectDescriptorBase {
    required (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};

GPUPipelineDescriptorBase has the following members:

layout, of type (GPUPipelineLayout or GPUAutoLayoutMode)

The GPUPipelineLayout for this pipeline, or "auto" to generate the pipeline layout automatically.

Note: If "auto" is used the pipeline cannot share GPUBindGroups with any other pipelines.

interface mixin GPUPipelineBase {
    [NewObject] GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

GPUPipelineBase has the following device timeline properties:

[[layout]], of type GPUPipelineLayout

The definition of the layout of resources which can be used with this.

GPUPipelineBase has the following methods:

getBindGroupLayout(index)

Gets a GPUBindGroupLayout that is compatible with the GPUPipelineBase’s GPUBindGroupLayout at index.

Called on: GPUPipelineBase this

Arguments:

Arguments for the GPUPipelineBase.getBindGroupLayout(index) method.
Parameter Type Nullable Optional Description
index unsigned long Index into the pipeline layout’s [[bindGroupLayouts]] sequence.

Returns: GPUBindGroupLayout

Content timeline steps:

  1. Let layout be a new GPUBindGroupLayout object.

  2. Issue the initialization steps on the Device timeline of this.

  3. Return layout.

Device timeline initialization steps:
  1. Let limits be this.[[device]].[[limits]].

  2. If any of the following conditions are unsatisfied generate a validation error, invalidate layout and return:

    • index must be < limits.maxBindGroups.

  3. Initialize layout so it is a copy of this.[[layout]].[[bindGroupLayouts]][index].

    Note: GPUBindGroupLayout is only ever used by-value, not by-reference, so this is equivalent to returning the same internal object with a new WebGPU interface. A new GPUBindGroupLayout WebGPU interface is returned each time to avoid a round-trip between the Content timeline and the Device timeline.

10.1.1. Default pipeline layout

A GPUPipelineBase object that was created with a layout set to "auto" has a default layout created and used instead.

Note: Default layouts are provided as a convenience for simple pipelines, but use of explicit layouts is recommended in most cases. Bind groups created from default layouts cannot be used with other pipelines, and the structure of the default layout may change when altering shaders, causing unexpected bind group creation errors.

To create a default pipeline layout for GPUPipelineBase pipeline, run the following device timeline steps:

  1. Let groupCount be 0.

  2. Let groupDescs be a sequence of device.[[limits]].maxBindGroups new GPUBindGroupLayoutDescriptor objects.

  3. For each groupDesc in groupDescs:

    1. Set groupDesc.entries to an empty sequence.

  4. For each GPUProgrammableStage stageDesc in the descriptor used to create pipeline:

    1. Let shaderStage be the GPUShaderStageFlags for the shader stage at which stageDesc is used in pipeline.

    2. Let entryPoint be get the entry point(shaderStage, stageDesc). Assert entryPoint is not null.

    3. For each resource resource statically used by entryPoint:

      1. Let group be resource’s "group" decoration.

      2. Let binding be resource’s "binding" decoration.

      3. Let entry be a new GPUBindGroupLayoutEntry.

      4. Set entry.binding to binding.

      5. Set entry.visibility to shaderStage.

      6. If resource is for a sampler binding:

        1. Let samplerLayout be a new GPUSamplerBindingLayout.

        2. Set entry.sampler to samplerLayout.

      7. If resource is for a comparison sampler binding:

        1. Let samplerLayout be a new GPUSamplerBindingLayout.

        2. Set samplerLayout.type to "comparison".

        3. Set entry.sampler to samplerLayout.

      8. If resource is for a buffer binding:

        1. Let bufferLayout be a new GPUBufferBindingLayout.

        2. Set bufferLayout.minBindingSize to resource’s minimum buffer binding size.

        3. If resource is for a read-only storage buffer:

          1. Set bufferLayout.type to "read-only-storage".

        4. If resource is for a storage buffer:

          1. Set bufferLayout.type to "storage".

        5. Set entry.buffer to bufferLayout.

      9. If resource is for a sampled texture binding:

        1. Let textureLayout be a new GPUTextureBindingLayout.

        2. If resource is a depth texture binding:

          Set textureLayout.sampleType to "depth"

          Else if the sampled type of resource is:

          f32 and there exists a static use of resource by stageDesc in a texture builtin function call that also uses a sampler

          Set textureLayout.sampleType to "float"

          f32 otherwise

          Set textureLayout.sampleType to "unfilterable-float"

          i32

          Set textureLayout.sampleType to "sint"

          u32

          Set textureLayout.sampleType to "uint"

        3. Set textureLayout.viewDimension to resource’s dimension.

        4. If resource is for a multisampled texture:

          1. Set textureLayout.multisampled to true.

        5. Set entry.texture to textureLayout.

      10. If resource is for a storage texture binding:

        1. Let storageTextureLayout be a new GPUStorageTextureBindingLayout.

        2. Set storageTextureLayout.format to resource’s format.

        3. Set storageTextureLayout.viewDimension to resource’s dimension.

        4. If the access mode is:

          read

          Set storageTextureLayout.access to "read-only".

          write

          Set storageTextureLayout.access to "write-only".

          read_write

          Set storageTextureLayout.access to "read-write".

        5. Set entry.storageTexture to storageTextureLayout.

      11. Set groupCount to max(groupCount, group + 1).

      12. If groupDescs[group] has an entry previousEntry with binding equal to binding:

        1. If entry has different visibility than previousEntry:

          1. Add the bits set in entry.visibility into previousEntry.visibility

        2. If resource is for a buffer binding and entry has greater buffer.minBindingSize than previousEntry:

          1. Set previousEntry.buffer.minBindingSize to entry.buffer.minBindingSize.

        3. If resource is a sampled texture binding and entry has different texture.sampleType than previousEntry and both entry and previousEntry have texture.sampleType of either "float" or "unfilterable-float":

          1. Set previousEntry.texture.sampleType to "float".

        4. If resource is a storage texture binding, entry.storageTexture.access is "read-write", previousEntry.storageTexture.access is "write-only", and previousEntry.storageTexture.format is compatible with STORAGE_BINDING and "read-write" according to the § 26.1.1 Plain color formats table:

          1. Set previousEntry.storageTexture.access to "read-write".

        5. If any other property is unequal between entry and previousEntry:

          1. Return null (which will cause the creation of the pipeline to fail).

      13. Else

        1. Append entry to groupDescs[group].

  5. Let groupLayouts be a new list.

  6. For each i from 0 to groupCount - 1, inclusive:

    1. Let groupDesc be groupDescs[i].

    2. Let bindGroupLayout be the result of calling device.createBindGroupLayout(groupDesc).

    3. Set bindGroupLayout.[[exclusivePipeline]] to pipeline.

    4. Append bindGroupLayout to groupLayouts.

  7. Let desc be a new GPUPipelineLayoutDescriptor.

  8. Set desc.bindGroupLayouts to groupLayouts.

  9. Return device.createPipelineLayout(desc).
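The per-binding merge performed in steps 12.1–12.3 above can be illustrated with a small sketch. `mergeDefaultLayoutEntries` is a hypothetical helper, not the normative algorithm; it combines a new default-layout entry with a previously recorded entry for the same @binding, handling only the visibility, minBindingSize, and sampleType cases for brevity:

```javascript
// Sketch (not the normative algorithm): merge a new GPUBindGroupLayoutEntry-
// like record into a previous one for the same binding index. Returns null
// when the entries are incompatible, which fails pipeline creation.
function mergeDefaultLayoutEntries(previous, entry) {
  const merged = structuredClone(previous);
  // Step 12.1: union the stage visibility bits.
  merged.visibility |= entry.visibility;
  // Step 12.2: keep the larger minimum buffer binding size.
  if (entry.buffer && previous.buffer) {
    merged.buffer.minBindingSize =
      Math.max(previous.buffer.minBindingSize, entry.buffer.minBindingSize);
  }
  // Step 12.3: "float" and "unfilterable-float" sample types merge to "float";
  // any other sample type mismatch is an error.
  if (entry.texture && previous.texture) {
    const types = [previous.texture.sampleType, entry.texture.sampleType];
    if (types.includes('float') && types.includes('unfilterable-float')) {
      merged.texture.sampleType = 'float';
    } else if (types[0] !== types[1]) {
      return null;
    }
  }
  return merged;
}

// GPUShaderStage bit values: VERTEX = 0x1, FRAGMENT = 0x2.
const a = { binding: 0, visibility: 0x1, buffer: { minBindingSize: 16 } };
const b = { binding: 0, visibility: 0x2, buffer: { minBindingSize: 32 } };
const m = mergeDefaultLayoutEntries(a, b);
// m.visibility === 3, m.buffer.minBindingSize === 32
```

A mismatch not covered by these merge rules (for example, "sint" versus "float" sample types) corresponds to step 12.5 and causes pipeline creation to fail.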

10.1.2. GPUProgrammableStage

A GPUProgrammableStage describes the entry point in the user-provided GPUShaderModule that controls one of the programmable stages of a pipeline. Entry point names follow the rules defined in WGSL identifier comparison.

dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants = {};
};

typedef double GPUPipelineConstantValue; // May represent WGSL's bool, f32, i32, u32, and f16 if enabled.

GPUProgrammableStage has the following members:

module, of type GPUShaderModule

The GPUShaderModule containing the code that this programmable stage will execute.

entryPoint, of type USVString

The name of the function in module that this stage will use to perform its work.

NOTE: Since the entryPoint dictionary member is not required, methods which consume a GPUProgrammableStage must use the "get the entry point" algorithm to determine which entry point it refers to.

constants, of type record<USVString, GPUPipelineConstantValue>, defaulting to {}

Specifies the values of pipeline-overridable constants in the shader module module.

Each such pipeline-overridable constant is uniquely identified by a single pipeline-overridable constant identifier string, representing the pipeline constant ID of the constant if its declaration specifies one, and otherwise the constant’s identifier name.

The key of each key-value pair must equal the identifier string of one such constant, with the comparison performed according to the rules for WGSL identifier comparison. When the pipeline is executed, that constant will have the specified value.

Values are specified as GPUPipelineConstantValue, which is a double. They are converted to the WGSL type of the pipeline-overridable constant (bool/i32/u32/f32/f16). If conversion fails, a validation error is generated.

Pipeline-overridable constants defined in WGSL:
@id(0)      override has_point_light: bool = true;  // Algorithmic control.
@id(1200)   override specular_param: f32 = 2.3;     // Numeric control.
@id(1300)   override gain: f32;                     // Must be overridden.
            override width: f32 = 0.0;              // Specified at the API level
                                                    //   using the name "width".
            override depth: f32;                    // Specified at the API level
                                                    //   using the name "depth".
                                                    //   Must be overridden.
            override height = 2 * depth;            // The default value
                                                    // (if not set at the API level),
                                                    // depends on another
                                                    // overridable constant.

Corresponding JavaScript code, providing only the overrides which are required (have no defaults):

{
    // ...
    constants: {
        1300: 2.0,  // "gain"
        depth: -1,  // "depth"
    }
}

Corresponding JavaScript code, overriding all constants:

{
    // ...
    constants: {
        0: false,   // "has_point_light"
        1200: 3.0,  // "specular_param"
        1300: 2.0,  // "gain"
        width: 20,  // "width"
        depth: -1,  // "depth"
        height: 15, // "height"
    }
}
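The identifier-string rule above can be sketched as follows. `identifierString` is a hypothetical helper showing how each override declaration in the WGSL example maps to the key used in constants — the numeric pipeline constant ID when an @id is given, otherwise the identifier name:

```javascript
// Sketch: the pipeline-overridable constant identifier string is the
// @id, as a decimal string, when one is specified; otherwise the name.
function identifierString(decl) {
  return decl.id !== undefined ? String(decl.id) : decl.name;
}

// The declarations from the WGSL example above, as plain records.
const overrides = [
  { id: 0,    name: 'has_point_light' },
  { id: 1200, name: 'specular_param' },
  { id: 1300, name: 'gain' },
  {           name: 'width' },
  {           name: 'depth' },
];
const keys = overrides.map(identifierString);
// keys === ['0', '1200', '1300', 'width', 'depth']
```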
To get the entry point(GPUShaderStage stage, GPUProgrammableStage descriptor), run the following device timeline steps:
  1. If descriptor.entryPoint is provided:

    1. If descriptor.module contains an entry point whose name equals descriptor.entryPoint, and whose shader stage equals stage, return that entry point.

      Otherwise, return null.

  2. Otherwise:

    1. If there is exactly one entry point in descriptor.module whose shader stage equals stage, return that entry point.

      Otherwise, return null.
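These steps can be sketched in plain JavaScript. The `entryPoints` listing on the module is a hypothetical stand-in for the entry points declared in the WGSL source (it is not a real GPUShaderModule property):

```javascript
// Sketch of "get the entry point": pick by explicit name when
// descriptor.entryPoint is provided, otherwise require that the stage
// has exactly one entry point in the module.
function getEntryPoint(stage, descriptor) {
  const candidates = descriptor.module.entryPoints
    .filter((ep) => ep.stage === stage);
  if (descriptor.entryPoint !== undefined) {
    return candidates.find((ep) => ep.name === descriptor.entryPoint) ?? null;
  }
  return candidates.length === 1 ? candidates[0] : null;
}

const shaderModule = { entryPoints: [
  { name: 'vsMain', stage: 'vertex' },
  { name: 'fsMain', stage: 'fragment' },
] };
getEntryPoint('fragment', { module: shaderModule });                       // fsMain
getEntryPoint('vertex',   { module: shaderModule, entryPoint: 'fsMain' }); // null
```

The second call returns null because the named entry point exists but is for the wrong shader stage.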

validating GPUProgrammableStage(stage, descriptor, layout, device)

Arguments:

All of the requirements in the following steps must be met. If any are unmet, return false; otherwise, return true.

  1. descriptor.module must be valid to use with device.

  2. Let entryPoint be get the entry point(stage, descriptor).

  3. entryPoint must not be null.

  4. For each binding that is statically used by entryPoint:

    1. validating shader binding(binding, layout) must return true.

  5. For each texture builtin function call in any of the functions in the shader stage rooted at entryPoint, if it uses a textureBinding of sampled texture or depth texture type together with a samplerBinding of sampler type (excluding sampler_comparison):

    1. Let texture be the GPUBindGroupLayoutEntry corresponding to textureBinding.

    2. Let sampler be the GPUBindGroupLayoutEntry corresponding to samplerBinding.

    3. If sampler.type is "filtering", then texture.sampleType must be "float".

    Note: "comparison" samplers can also only be used with "depth" textures, because they are the only texture type that can be bound to WGSL texture_depth_* bindings.

  6. For each keyvalue in descriptor.constants:

    1. key must equal the pipeline-overridable constant identifier string of some pipeline-overridable constant defined in the shader module descriptor.module by the rules defined in WGSL identifier comparison. The pipeline-overridable constant is not required to be statically used by entryPoint. Let the type of that constant be T.

    2. Converting the IDL value value to WGSL type T must not throw a TypeError.

  7. For each pipeline-overridable constant identifier string key which is statically used by entryPoint:

    1. If the pipeline-overridable constant identified by key does not have a default value, descriptor.constants must contain key.

  8. Pipeline-creation program errors must not result from the rules of the [WGSL] specification.

validating shader binding(variable, layout)

Arguments:

Let bindGroup be the bind group index, and bindIndex be the binding index, of the shader binding declaration variable.

Return true if all of the following conditions are satisfied:

The minimum buffer binding size for a buffer binding variable var is computed as follows:
  1. Let T be the store type of var.

  2. If T is a runtime-sized array, or contains a runtime-sized array, replace that array<E> with array<E, 1>.

    Note: This ensures there’s always enough memory for one element, which allows array indices to be clamped to the length of the array resulting in an in-memory access.

  3. Return SizeOf(T).

Note: Enforcing this lower bound ensures reads and writes via the buffer variable only access memory locations within the bound region of the buffer.
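The computation above can be sketched with a toy type model. This is a deliberately simplified stand-in for WGSL's SizeOf: scalars are 4 bytes and struct alignment/padding rules are ignored, so real sizes may be larger; the point is step 2's treatment of runtime-sized arrays:

```javascript
// Simplified sketch of the minimum buffer binding size: a runtime-sized
// array<E> (no count) is treated as array<E, 1>. Toy type model only —
// WGSL alignment and padding rules are intentionally omitted.
function minBindingSize(storeType) {
  switch (storeType.kind) {
    case 'scalar':
      return 4; // f32 / i32 / u32
    case 'array': {
      const count = storeType.count ?? 1; // runtime-sized → one element
      return count * minBindingSize(storeType.element);
    }
    case 'struct':
      return storeType.members.reduce((sum, m) => sum + minBindingSize(m), 0);
  }
}

// struct { count: u32, data: array<f32> }  →  4 + 1 * 4 = 8 bytes
const size = minBindingSize({ kind: 'struct', members: [
  { kind: 'scalar' },
  { kind: 'array', element: { kind: 'scalar' } }, // runtime-sized
] });
```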

A resource binding, pipeline-overridable constant, shader stage input, or shader stage output is considered to be statically used by an entry point if it is present in the interface of the shader stage for that entry point.

10.2. GPUComputePipeline

A GPUComputePipeline is a kind of pipeline that controls the compute shader stage, and can be used in GPUComputePassEncoder.

Compute inputs and outputs are all contained in the bindings, according to the given GPUPipelineLayout. The outputs correspond to buffer bindings with a type of "storage" and storageTexture bindings with an access of "write-only" or "read-write".

Stages of a compute pipeline:

  1. Compute shader

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

10.2.1. Compute Pipeline Creation

A GPUComputePipelineDescriptor describes a compute pipeline. See § 23.1 Computing for additional details.

dictionary GPUComputePipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};

GPUComputePipelineDescriptor has the following members:

compute, of type GPUProgrammableStage

Describes the compute shader entry point of the pipeline.

createComputePipeline(descriptor)

Creates a GPUComputePipeline using immediate pipeline creation.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createComputePipeline(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUComputePipelineDescriptor Description of the GPUComputePipeline to create.

Returns: GPUComputePipeline

Content timeline steps:

  1. Let pipeline be ! create a new WebGPU object(this, GPUComputePipeline, descriptor).

  2. Issue the initialization steps on the Device timeline of this.

  3. Return pipeline.

Device timeline initialization steps:
  1. Let layout be a new default pipeline layout for pipeline if descriptor.layout is "auto", and descriptor.layout otherwise.

  2. All of the requirements in the following steps must be met. If any are unmet, generate a validation error, invalidate pipeline and return.

    1. layout must be valid to use with this.

    2. validating GPUProgrammableStage(COMPUTE, descriptor.compute, layout, this) must succeed.

    3. Let entryPoint be get the entry point(COMPUTE, descriptor.compute).

      Assert entryPoint is not null.

    4. Let workgroupStorageUsed be the sum of roundUp(16, SizeOf(T)) over each type T of all variables with address space "workgroup" statically used by entryPoint.

      workgroupStorageUsed must be ≤ device.limits.maxComputeWorkgroupStorageSize.

    5. entryPoint must use ≤ device.limits.maxComputeInvocationsPerWorkgroup per workgroup.

    6. Each component of entryPoint’s workgroup_size attribute must be ≤ the corresponding component in [device.limits.maxComputeWorkgroupSizeX, device.limits.maxComputeWorkgroupSizeY, device.limits.maxComputeWorkgroupSizeZ].

  3. If any pipeline-creation uncategorized errors result from the implementation of pipeline creation, generate an internal error, invalidate pipeline and return.

    Note: Even if the implementation detected uncategorized errors in shader module creation, the error is surfaced here.

  4. Set pipeline.[[layout]] to layout.
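The limit checks in steps 2.4–2.6 above can be sketched as follows. The limit values used here are WebGPU's default limits; an actual device may support higher values:

```javascript
// Sketch of the compute-stage limit checks: workgroup storage usage
// (each workgroup variable rounded up to a multiple of 16 bytes) and
// total invocations per workgroup. Values are the WebGPU default limits.
const limits = {
  maxComputeWorkgroupStorageSize: 16384,
  maxComputeInvocationsPerWorkgroup: 256,
  maxComputeWorkgroupSizeX: 256,
  maxComputeWorkgroupSizeY: 256,
  maxComputeWorkgroupSizeZ: 64,
};

const roundUp = (k, n) => Math.ceil(n / k) * k;

function validateComputeStage(workgroupVarSizes, [x, y, z]) {
  // workgroupStorageUsed = sum of roundUp(16, SizeOf(T)) over all
  // address-space-"workgroup" variables statically used by the entry point.
  const storageUsed = workgroupVarSizes
    .reduce((sum, size) => sum + roundUp(16, size), 0);
  return storageUsed <= limits.maxComputeWorkgroupStorageSize &&
    x * y * z <= limits.maxComputeInvocationsPerWorkgroup &&
    x <= limits.maxComputeWorkgroupSizeX &&
    y <= limits.maxComputeWorkgroupSizeY &&
    z <= limits.maxComputeWorkgroupSizeZ;
}

validateComputeStage([1024, 20], [8, 8, 4]); // true: 1056 ≤ 16384, 256 invocations
validateComputeStage([], [32, 32, 1]);       // false: 1024 invocations > 256
```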

createComputePipelineAsync(descriptor)

Creates a GPUComputePipeline using async pipeline creation. The returned Promise resolves when the created pipeline is ready to be used without additional delay.

If pipeline creation fails, the returned Promise rejects with a GPUPipelineError. (A GPUError is not dispatched to the device.)

Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createComputePipelineAsync(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUComputePipelineDescriptor Description of the GPUComputePipeline to create.

Returns: Promise<GPUComputePipeline>

Content timeline steps:

  1. Let contentTimeline be the current Content timeline.

  2. Let promise be a new promise.

  3. Issue the initialization steps on the Device timeline of this.

  4. Return promise.

Device timeline initialization steps:
  1. Let pipeline be a new GPUComputePipeline created as if this.createComputePipeline() was called with descriptor, except capturing any errors as error, rather than dispatching them to the device.

  2. Let event occur upon the (successful or unsuccessful) completion of pipeline creation for pipeline.

  3. Listen for timeline event event on this.[[device]], handled by the subsequent steps on the device timeline of this.

Device timeline steps:
  1. If pipeline is valid or this is lost:

    1. Issue the following steps on contentTimeline:

      Content timeline steps:
      1. Resolve promise with pipeline.

    2. Return.

    Note: No errors are generated from a device which is lost. See § 22 Errors & Debugging.

  2. If pipeline is invalid and error is an internal error, issue the following steps on contentTimeline, and return:

    Content timeline steps:
    1. Reject promise with a new GPUPipelineError whose reason is "internal".

  3. If pipeline is invalid and error is a validation error, issue the following steps on contentTimeline, and return:

    Content timeline steps:
    1. Reject promise with a new GPUPipelineError whose reason is "validation".

Creating a simple GPUComputePipeline:
const computePipeline = gpuDevice.createComputePipeline({
    layout: pipelineLayout,
    compute: {
        module: computeShaderModule,
        entryPoint: 'computeMain',
    }
});

10.3. GPURenderPipeline

A GPURenderPipeline is a kind of pipeline that controls the vertex and fragment shader stages, and can be used in GPURenderPassEncoder as well as GPURenderBundleEncoder.

Render pipeline inputs are:

Render pipeline outputs are:

A render pipeline is comprised of the following render stages:

  1. Vertex fetch, controlled by GPUVertexState.buffers

  2. Vertex shader, controlled by GPUVertexState

  3. Primitive assembly, controlled by GPUPrimitiveState

  4. Rasterization, controlled by GPUPrimitiveState, GPUDepthStencilState, and GPUMultisampleState

  5. Fragment shader, controlled by GPUFragmentState

  6. Stencil test and operation, controlled by GPUDepthStencilState

  7. Depth test and write, controlled by GPUDepthStencilState

  8. Output merging, controlled by GPUFragmentState.targets

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

GPURenderPipeline has the following device timeline properties:

[[descriptor]], of type GPURenderPipelineDescriptor, readonly

The GPURenderPipelineDescriptor describing this pipeline.

All optional fields of GPURenderPipelineDescriptor are defined.

[[writesDepth]], of type boolean, readonly

True if the pipeline writes to the depth component of the depth/stencil attachment

[[writesStencil]], of type boolean, readonly

True if the pipeline writes to the stencil component of the depth/stencil attachment

10.3.1. Render Pipeline Creation

A GPURenderPipelineDescriptor describes a render pipeline by configuring each of the render stages. See § 23.2 Rendering for additional details.

dictionary GPURenderPipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};

GPURenderPipelineDescriptor has the following members:

vertex, of type GPUVertexState

Describes the vertex shader entry point of the pipeline and its input buffer layouts.

primitive, of type GPUPrimitiveState, defaulting to {}

Describes the primitive-related properties of the pipeline.

depthStencil, of type GPUDepthStencilState

Describes the optional depth-stencil properties, including the testing, operations, and bias.

multisample, of type GPUMultisampleState, defaulting to {}

Describes the multi-sampling properties of the pipeline.

fragment, of type GPUFragmentState

Describes the fragment shader entry point of the pipeline and its output colors. If not provided, the § 23.2.8 No Color Output mode is enabled.

createRenderPipeline(descriptor)

Creates a GPURenderPipeline using immediate pipeline creation.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createRenderPipeline(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderPipelineDescriptor Description of the GPURenderPipeline to create.

Returns: GPURenderPipeline

Content timeline steps:

  1. If descriptor.fragment is provided:

    1. For each non-null colorState of descriptor.fragment.targets:

      1. ? Validate texture format required features of colorState.format with this.[[device]].

  2. If descriptor.depthStencil is provided:

    1. ? Validate texture format required features of descriptor.depthStencil.format with this.[[device]].

  3. Let pipeline be ! create a new WebGPU object(this, GPURenderPipeline, descriptor).

  4. Issue the initialization steps on the Device timeline of this.

  5. Return pipeline.

Device timeline initialization steps:
  1. Let layout be a new default pipeline layout for pipeline if descriptor.layout is "auto", and descriptor.layout otherwise.

  2. All of the requirements in the following steps must be met. If any are unmet, generate a validation error, invalidate pipeline, and return.

    1. layout must be valid to use with this.

    2. validating GPURenderPipelineDescriptor(descriptor, layout, this) must succeed.

    3. Let vertexBufferCount be the index of the last non-null entry in descriptor.vertex.buffers, plus 1; or 0 if there are none.

    4. layout.[[bindGroupLayouts]].size + vertexBufferCount must be ≤ this.[[device]].[[limits]].maxBindGroupsPlusVertexBuffers.

  3. If any pipeline-creation uncategorized errors result from the implementation of pipeline creation, generate an internal error, invalidate pipeline and return.

    Note: Even if the implementation detected uncategorized errors in shader module creation, the error is surfaced here.

  4. Set pipeline.[[descriptor]] to descriptor.

  5. Set pipeline.[[writesDepth]] to false.

  6. Set pipeline.[[writesStencil]] to false.

  7. Let depthStencil be descriptor.depthStencil.

  8. If depthStencil is not null:

    1. If depthStencil.depthWriteEnabled is provided:

      1. Set pipeline.[[writesDepth]] to depthStencil.depthWriteEnabled.

    2. If depthStencil.stencilWriteMask is not 0:

      1. Let stencilFront be depthStencil.stencilFront.

      2. Let stencilBack be depthStencil.stencilBack.

      3. Let cullMode be descriptor.primitive.cullMode.

      4. If cullMode is not "front", and any of stencilFront.passOp, stencilFront.depthFailOp, or stencilFront.failOp is not "keep":

        1. Set pipeline.[[writesStencil]] to true.

      5. If cullMode is not "back", and any of stencilBack.passOp, stencilBack.depthFailOp, or stencilBack.failOp is not "keep":

        1. Set pipeline.[[writesStencil]] to true.

  9. Set pipeline.[[layout]] to layout.
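The derivation of [[writesDepth]] and [[writesStencil]] in steps 5–8 above can be sketched in plain JavaScript. `computeWriteFlags` is a hypothetical helper, not part of the API; a stencil face contributes only when one of its operations is not "keep" and that face is not culled away:

```javascript
// Sketch of the [[writesDepth]]/[[writesStencil]] derivation. Missing
// stencil operations default to "keep", as in GPUStencilFaceState.
function computeWriteFlags(depthStencil, cullMode) {
  const writesDepth = depthStencil.depthWriteEnabled === true;
  let writesStencil = false;
  if (depthStencil.stencilWriteMask !== 0) {
    const touches = (face) =>
      [face.passOp, face.depthFailOp, face.failOp]
        .some((op) => (op ?? 'keep') !== 'keep');
    if (cullMode !== 'front' && touches(depthStencil.stencilFront ?? {})) {
      writesStencil = true;
    }
    if (cullMode !== 'back' && touches(depthStencil.stencilBack ?? {})) {
      writesStencil = true;
    }
  }
  return { writesDepth, writesStencil };
}

computeWriteFlags(
  { depthWriteEnabled: false, stencilWriteMask: 0xFF,
    stencilFront: { passOp: 'replace' } },
  'front'); // writesStencil: false — the only writing face is culled
```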

createRenderPipelineAsync(descriptor)

Creates a GPURenderPipeline using async pipeline creation. The returned Promise resolves when the created pipeline is ready to be used without additional delay.

If pipeline creation fails, the returned Promise rejects with a GPUPipelineError. (A GPUError is not dispatched to the device.)

Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createRenderPipelineAsync(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderPipelineDescriptor Description of the GPURenderPipeline to create.

Returns: Promise<GPURenderPipeline>

Content timeline steps:

  1. Let contentTimeline be the current Content timeline.

  2. Let promise be a new promise.

  3. Issue the initialization steps on the Device timeline of this.

  4. Return promise.

Device timeline initialization steps:
  1. Let pipeline be a new GPURenderPipeline created as if this.createRenderPipeline() was called with descriptor, except capturing any errors as error, rather than dispatching them to the device.

  2. Let event occur upon the (successful or unsuccessful) completion of pipeline creation for pipeline.

  3. Listen for timeline event event on this.[[device]], handled by the subsequent steps on the device timeline of this.

Device timeline steps:
  1. If pipeline is valid or this is lost:

    1. Issue the following steps on contentTimeline:

      Content timeline steps:
      1. Resolve promise with pipeline.

    2. Return.

    Note: No errors are generated from a device which is lost. See § 22 Errors & Debugging.

  2. If pipeline is invalid and error is an internal error, issue the following steps on contentTimeline, and return:

    Content timeline steps:
    1. Reject promise with a new GPUPipelineError whose reason is "internal".

  3. If pipeline is invalid and error is a validation error, issue the following steps on contentTimeline, and return:

    Content timeline steps:
    1. Reject promise with a new GPUPipelineError whose reason is "validation".

validating GPURenderPipelineDescriptor(descriptor, layout, device)

Arguments:

Device timeline steps:

  1. Return true if all of the following conditions are satisfied:

validating inter-stage interfaces(device, descriptor)

Arguments:

Returns: boolean

Device timeline steps:

  1. Let maxVertexShaderOutputVariables be device.limits.maxInterStageShaderVariables.

  2. Let maxVertexShaderOutputLocation be device.limits.maxInterStageShaderVariables - 1.

  3. If descriptor.primitive.topology is "point-list":

    1. Decrement maxVertexShaderOutputVariables by 1.

  4. If clip_distances is declared in the output of descriptor.vertex:

    1. Let clipDistancesSize be the array size of clip_distances.

    2. Decrement maxVertexShaderOutputVariables by ceil(clipDistancesSize / 4).

    3. Decrement maxVertexShaderOutputLocation by ceil(clipDistancesSize / 4).

  5. Return false if any of the following requirements are unmet:

    • There must be no more than maxVertexShaderOutputVariables user-defined outputs for descriptor.vertex.

    • The location of each user-defined output of descriptor.vertex must be ≤ maxVertexShaderOutputLocation.

  6. If descriptor.fragment is provided:

    1. Let maxFragmentShaderInputVariables be device.limits.maxInterStageShaderVariables.

    2. If any of the front_facing, sample_index, or sample_mask builtins are an input of descriptor.fragment:

      1. Decrement maxFragmentShaderInputVariables by 1.

    3. Return false if any of the following requirements are unmet:

      • For each user-defined input of descriptor.fragment there must be a user-defined output of descriptor.vertex whose location, type, and interpolation match those of the input.

        Note: Vertex-only pipelines can have user-defined outputs in the vertex stage; their values will be discarded.

      • There must be no more than maxFragmentShaderInputVariables user-defined inputs for descriptor.fragment.

    4. Assert that the location of each user-defined input of descriptor.fragment is less than device.limits.maxInterStageShaderVariables. (This follows from the above rules.)

  7. Return true.
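Steps 1–4 above compute a budget for the vertex stage's user-defined outputs; this can be sketched as follows. The default value of maxInterStageShaderVariables is 16:

```javascript
// Sketch of the vertex-output budget: "point-list" topology consumes
// one variable, and clip_distances consumes ceil(size / 4) variables
// and location slots.
function vertexOutputBudget(limit, topology, clipDistancesSize) {
  let maxVariables = limit;     // maxVertexShaderOutputVariables
  let maxLocation = limit - 1;  // maxVertexShaderOutputLocation
  if (topology === 'point-list') maxVariables -= 1;
  if (clipDistancesSize > 0) {
    const slots = Math.ceil(clipDistancesSize / 4);
    maxVariables -= slots;
    maxLocation -= slots;
  }
  return { maxVariables, maxLocation };
}

vertexOutputBudget(16, 'triangle-list', 0); // { maxVariables: 16, maxLocation: 15 }
vertexOutputBudget(16, 'point-list', 5);    // { maxVariables: 13, maxLocation: 13 }
```

In the second call, "point-list" costs one variable and the 5-element clip_distances array costs ceil(5 / 4) = 2 variables and 2 location slots.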

Creating a simple GPURenderPipeline:
const renderPipeline = gpuDevice.createRenderPipeline({
    layout: pipelineLayout,
    vertex: {
        module: shaderModule,
        entryPoint: 'vertexMain'
    },
    fragment: {
        module: shaderModule,
        entryPoint: 'fragmentMain',
        targets: [{
            format: 'bgra8unorm',
        }],
    }
});

10.3.2. Primitive State

dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // Requires "depth-clip-control" feature.
    boolean unclippedDepth = false;
};

GPUPrimitiveState has the following members, which describe how a GPURenderPipeline constructs and rasterizes primitives from its vertex inputs:

topology, of type GPUPrimitiveTopology, defaulting to "triangle-list"

The type of primitive to be constructed from the vertex inputs.

stripIndexFormat, of type GPUIndexFormat

For pipelines with strip topologies ("line-strip" or "triangle-strip"), this determines the index buffer format and primitive restart value ("uint16"/0xFFFF or "uint32"/0xFFFFFFFF). It is not allowed on pipelines with non-strip topologies.

Note: Some implementations require knowledge of the primitive restart value to compile pipeline state objects.

To use a strip-topology pipeline with an indexed draw call (drawIndexed() or drawIndexedIndirect()), this must be set, and it must match the index buffer format used with the draw call (set in setIndexBuffer()).

See § 23.2.3 Primitive Assembly for additional details.

frontFace, of type GPUFrontFace, defaulting to "ccw"

Defines which polygons are considered front-facing.

cullMode, of type GPUCullMode, defaulting to "none"

Defines which polygon orientation will be culled, if any.

unclippedDepth, of type boolean, defaulting to false

If true, indicates that depth clipping is disabled.

Requires the "depth-clip-control" feature to be enabled.

validating GPUPrimitiveState(descriptor, device)

Arguments:

Device timeline steps:

  1. Return true if all of the following conditions are satisfied:

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip",
};

GPUPrimitiveTopology defines the primitive type that draw calls made with a GPURenderPipeline will use. See § 23.2.5 Rasterization for additional details:

"point-list"

Each vertex defines a point primitive.

"line-list"

Each consecutive pair of vertices defines a line primitive.

"line-strip"

Each vertex after the first defines a line primitive between it and the previous vertex.

"triangle-list"

Each consecutive triplet of vertices defines a triangle primitive.

"triangle-strip"

Each vertex after the first two defines a triangle primitive between it and the previous two vertices.

enum GPUFrontFace {
    "ccw",
    "cw",
};

GPUFrontFace defines which polygons are considered front-facing by a GPURenderPipeline. See § 23.2.5.4 Polygon Rasterization for additional details:

"ccw"

Polygons with vertices whose framebuffer coordinates are given in counter-clockwise order are considered front-facing.

"cw"

Polygons with vertices whose framebuffer coordinates are given in clockwise order are considered front-facing.

enum GPUCullMode {
    "none",
    "front",
    "back",
};

GPUCullMode defines which polygons will be culled by draw calls made with a GPURenderPipeline. See § 23.2.5.4 Polygon Rasterization for additional details:

"none"

No polygons are discarded.

"front"

Front-facing polygons are discarded.

"back"

Back-facing polygons are discarded.

Note: GPUFrontFace and GPUCullMode have no effect on "point-list", "line-list", or "line-strip" topologies.
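A hypothetical GPUPrimitiveState for an indexed triangle-strip draw might look like the following. Since the topology is a strip topology, stripIndexFormat must be set here and must match the format later passed to setIndexBuffer() for the draw call.

```javascript
// Hypothetical primitive state for an indexed "triangle-strip" pipeline.
const primitive = {
  topology: 'triangle-strip',
  stripIndexFormat: 'uint16', // primitive restart value will be 0xFFFF
  frontFace: 'ccw',           // counter-clockwise polygons are front-facing
  cullMode: 'back',           // discard back-facing polygons
};
```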

10.3.3. Multisample State

dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

GPUMultisampleState has the following members, which describe how a GPURenderPipeline interacts with a render pass’s multisampled attachments.

count, of type GPUSize32, defaulting to 1

Number of samples per pixel. This GPURenderPipeline will be compatible only with attachment textures (colorAttachments and depthStencilAttachment) with matching sampleCounts.

mask, of type GPUSampleMask, defaulting to 0xFFFFFFFF

Mask determining which samples are written to.

alphaToCoverageEnabled, of type boolean, defaulting to false

When true indicates that a fragment’s alpha channel should be used to generate a sample coverage mask.

validating GPUMultisampleState(descriptor)

Arguments:

Device timeline steps:

  1. Return true if all of the following conditions are satisfied:
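As an illustration of the members above, a hypothetical 4× MSAA configuration could look like this. All attachment textures used with such a pipeline would need a sampleCount of 4; the mask shown enables writes only to samples 0 and 2 (bit i controls sample i).

```javascript
// Hypothetical multisample state: 4 samples per pixel, partial sample mask.
const multisample = {
  count: 4,
  mask: 0b0101,                // write only samples 0 and 2
  alphaToCoverageEnabled: false,
};
```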

10.3.4. Fragment State

dictionary GPUFragmentState
         : GPUProgrammableStage {
    required sequence<GPUColorTargetState?> targets;
};
targets, of type sequence<GPUColorTargetState?>

A list of GPUColorTargetState defining the formats and behaviors of the color targets this pipeline writes to.

validating GPUFragmentState(device, descriptor, layout)

Arguments:

Device timeline steps:

  1. Return true if all of the following requirements are met:

Validating GPUFragmentState’s color attachment bytes per sample(device, targets)

Arguments:

Device timeline steps:

  1. Let formats be an empty list<GPUTextureFormat?>

  2. For each target in targets:

    1. If target is undefined, continue.

    2. Append target.format to formats.

  3. Calculating color attachment bytes per sample(formats) must be ≤ device.[[limits]].maxColorAttachmentBytesPerSample.

Note: The fragment shader may output more values than what the pipeline uses. If that is the case the values are ignored.

GPUBlendComponent component is a valid GPUBlendComponent with logical device device if it meets
the following requirements:

10.3.5. Color Target State

dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};
format, of type GPUTextureFormat

The GPUTextureFormat of this color target. The pipeline will only be compatible with GPURenderPassEncoders which use a GPUTextureView of this format in the corresponding color attachment.

blend, of type GPUBlendState

The blending behavior for this color target. If left undefined, disables blending for this color target.

writeMask, of type GPUColorWriteFlags, defaulting to 0xF

Bitmask controlling which channels are written to when drawing to this color target.

dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};
color, of type GPUBlendComponent

Defines the blending behavior of the corresponding render target for color channels.

alpha, of type GPUBlendComponent

Defines the blending behavior of the corresponding render target for the alpha channel.

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};
10.3.5.1. Blend State
dictionary GPUBlendComponent {
    GPUBlendOperation operation = "add";
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
};

GPUBlendComponent has the following members, which describe how the color or alpha components of a fragment are blended:

operation, of type GPUBlendOperation, defaulting to "add"

Defines the GPUBlendOperation used to calculate the values written to the target attachment components.

srcFactor, of type GPUBlendFactor, defaulting to "one"

Defines the GPUBlendFactor operation to be performed on values from the fragment shader.

dstFactor, of type GPUBlendFactor, defaulting to "zero"

Defines the GPUBlendFactor operation to be performed on values from the target attachment.

The following tables use this notation to describe color components for a given fragment location:

RGBAsrc Color output by the fragment shader for the color attachment. If the shader doesn’t return an alpha channel, src-alpha blend factors cannot be used.
RGBAsrc1 Color output by the fragment shader for the color attachment with "@blend_src" attribute equal to 1. If the shader doesn’t return an alpha channel, src1-alpha blend factors cannot be used.
RGBAdst Color currently in the color attachment. Missing green/blue/alpha channels default to 0, 0, 1, respectively.
RGBAconst The current [[blendConstant]].
RGBAsrcFactor The source blend factor components, as defined by srcFactor.
RGBAdstFactor The destination blend factor components, as defined by dstFactor.
enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant",
    "src1",
    "one-minus-src1",
    "src1-alpha",
    "one-minus-src1-alpha",
};

GPUBlendFactor defines how a source or destination blend factor is calculated:

GPUBlendFactor Blend factor RGBA components Feature
"zero" (0, 0, 0, 0)
"one" (1, 1, 1, 1)
"src" (Rsrc, Gsrc, Bsrc, Asrc)
"one-minus-src" (1 - Rsrc, 1 - Gsrc, 1 - Bsrc, 1 - Asrc)
"src-alpha" (Asrc, Asrc, Asrc, Asrc)
"one-minus-src-alpha" (1 - Asrc, 1 - Asrc, 1 - Asrc, 1 - Asrc)
"dst" (Rdst, Gdst, Bdst, Adst)
"one-minus-dst" (1 - Rdst, 1 - Gdst, 1 - Bdst, 1 - Adst)
"dst-alpha" (Adst, Adst, Adst, Adst)
"one-minus-dst-alpha" (1 - Adst, 1 - Adst, 1 - Adst, 1 - Adst)
"src-alpha-saturated" (min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), 1)
"constant" (Rconst, Gconst, Bconst, Aconst)
"one-minus-constant" (1 - Rconst, 1 - Gconst, 1 - Bconst, 1 - Aconst)
"src1" (Rsrc1, Gsrc1, Bsrc1, Asrc1) dual-source-blending
"one-minus-src1" (1 - Rsrc1, 1 - Gsrc1, 1 - Bsrc1, 1 - Asrc1)
"src1-alpha" (Asrc1, Asrc1, Asrc1, Asrc1)
"one-minus-src1-alpha" (1 - Asrc1, 1 - Asrc1, 1 - Asrc1, 1 - Asrc1)
enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max",
};

GPUBlendOperation defines the algorithm used to combine source and destination blend factors:

GPUBlendOperation RGBA Components
"add" RGBAsrc × RGBAsrcFactor + RGBAdst × RGBAdstFactor
"subtract" RGBAsrc × RGBAsrcFactor - RGBAdst × RGBAdstFactor
"reverse-subtract" RGBAdst × RGBAdstFactor - RGBAsrc × RGBAsrcFactor
"min" min(RGBAsrc, RGBAdst)
"max" max(RGBAsrc, RGBAdst)

10.3.6. Depth/Stencil State

dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled;
    GPUCompareFunction depthCompare;

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

GPUDepthStencilState has the following members, which describe how a GPURenderPipeline will affect a render pass’s depthStencilAttachment:

format, of type GPUTextureFormat

The format of depthStencilAttachment this GPURenderPipeline will be compatible with.

depthWriteEnabled, of type boolean

Indicates if this GPURenderPipeline can modify depthStencilAttachment depth values.

depthCompare, of type GPUCompareFunction

The comparison operation used to test fragment depths against depthStencilAttachment depth values.

stencilFront, of type GPUStencilFaceState, defaulting to {}

Defines how stencil comparisons and operations are performed for front-facing primitives.

stencilBack, of type GPUStencilFaceState, defaulting to {}

Defines how stencil comparisons and operations are performed for back-facing primitives.

stencilReadMask, of type GPUStencilValue, defaulting to 0xFFFFFFFF

Bitmask controlling which depthStencilAttachment stencil value bits are read when performing stencil comparison tests.

stencilWriteMask, of type GPUStencilValue, defaulting to 0xFFFFFFFF

Bitmask controlling which depthStencilAttachment stencil value bits are written to when performing stencil operations.

depthBias, of type GPUDepthBias, defaulting to 0

Constant depth bias added to each triangle fragment. See biased fragment depth for details.

depthBiasSlopeScale, of type float, defaulting to 0

Depth bias that scales with the triangle fragment’s slope. See biased fragment depth for details.

depthBiasClamp, of type float, defaulting to 0

The maximum depth bias of a triangle fragment. See biased fragment depth for details.

Note: depthBias, depthBiasSlopeScale, and depthBiasClamp have no effect on "point-list", "line-list", and "line-strip" primitives, and must be 0.

The biased fragment depth for a fragment being written to depthStencilAttachment attachment when drawing using GPUDepthStencilState state is calculated by running the following queue timeline steps:
  1. Let format be attachment.view.format.

  2. Let r be the minimum positive representable value > 0 in the format converted to a 32-bit float.

  3. Let maxDepthSlope be the maximum of the horizontal and vertical slopes of the fragment’s depth value.

  4. If format is a unorm format:

    1. Let bias be (float)state.depthBias * r + state.depthBiasSlopeScale * maxDepthSlope.

  5. Otherwise, if format is a float format:

    1. Let bias be (float)state.depthBias * 2^(exp(max depth in primitive) - r) + state.depthBiasSlopeScale * maxDepthSlope.

  6. If state.depthBiasClamp > 0:

    1. Set bias to min(state.depthBiasClamp, bias).

  7. Otherwise if state.depthBiasClamp < 0:

    1. Set bias to max(state.depthBiasClamp, bias).

  8. If state.depthBias ≠ 0 or state.depthBiasSlopeScale ≠ 0:

    1. Set the fragment depth value to fragment depth value + bias
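For a unorm depth format, the steps above can be sketched as follows, assuming r is the smallest representable depth increment of the format (e.g. 1/65535 for "depth16unorm") and maxDepthSlope is supplied by the rasterizer:

```javascript
// Sketch of the unorm branch of the biased fragment depth steps above.
function biasedDepth(fragmentDepth, state, r, maxDepthSlope) {
  let bias = state.depthBias * r + state.depthBiasSlopeScale * maxDepthSlope;
  if (state.depthBiasClamp > 0) bias = Math.min(state.depthBiasClamp, bias);
  else if (state.depthBiasClamp < 0) bias = Math.max(state.depthBiasClamp, bias);
  if (state.depthBias !== 0 || state.depthBiasSlopeScale !== 0) {
    return fragmentDepth + bias;
  }
  return fragmentDepth;
}
```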

validating GPUDepthStencilState(descriptor, topology)

Arguments:

Device timeline steps:

  1. Return true if, and only if, all of the following conditions are satisfied:

dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

GPUStencilFaceState has the following members, which describe how stencil comparisons and operations are performed:

compare, of type GPUCompareFunction, defaulting to "always"

The GPUCompareFunction used when testing the [[stencilReference]] value against the fragment’s depthStencilAttachment stencil values.

failOp, of type GPUStencilOperation, defaulting to "keep"

The GPUStencilOperation performed if the fragment stencil comparison test described by compare fails.

depthFailOp, of type GPUStencilOperation, defaulting to "keep"

The GPUStencilOperation performed if the fragment depth comparison described by depthCompare fails.

passOp, of type GPUStencilOperation, defaulting to "keep"

The GPUStencilOperation performed if the fragment stencil comparison test described by compare passes.

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap",
};

GPUStencilOperation defines the following operations:

"keep"

Keep the current stencil value.

"zero"

Set the stencil value to 0.

"replace"

Set the stencil value to [[stencilReference]].

"invert"

Bitwise-invert the current stencil value.

"increment-clamp"

Increments the current stencil value, clamping to the maximum representable value of the depthStencilAttachment’s stencil aspect.

"decrement-clamp"

Decrements the current stencil value, clamping to 0.

"increment-wrap"

Increments the current stencil value, wrapping to zero if the value exceeds the maximum representable value of the depthStencilAttachment’s stencil aspect.

"decrement-wrap"

Decrements the current stencil value, wrapping to the maximum representable value of the depthStencilAttachment’s stencil aspect if the value goes below 0.

10.3.7. Vertex State

enum GPUIndexFormat {
    "uint16",
    "uint32",
};

The index format determines the data type of index values in a buffer and, when used with strip primitive topologies ("line-strip" or "triangle-strip"), also specifies the primitive restart value. The primitive restart value is the index value that indicates a new primitive should be started, rather than continuing to construct the strip with the prior indexed vertices.

GPUPrimitiveStates that specify a strip primitive topology must specify a stripIndexFormat if they are used for indexed draws so that the primitive restart value that will be used is known at pipeline creation time. GPUPrimitiveStates that specify a list primitive topology will use the index format passed to setIndexBuffer() when doing indexed rendering.

Index format Byte size Primitive restart value
"uint16" 2 0xFFFF
"uint32" 4 0xFFFFFFFF
10.3.7.1. Vertex Formats

The GPUVertexFormat of a vertex attribute indicates how data from a vertex buffer will be interpreted and exposed to the shader. The name of the format specifies the order of components, bits per component, and vertex data type for the component.

Each vertex data type can map to any WGSL scalar type of the same base type, regardless of the bits per component:

Vertex format prefix Vertex data type Compatible WGSL types
uint unsigned int u32
sint signed int i32
unorm unsigned normalized f16, f32
snorm signed normalized f16, f32
float floating point f16, f32

The multi-component formats specify the number of components after "x". Mismatches in the number of components between the vertex format and shader type are allowed, with components being either dropped or filled with default values to compensate.

A vertex attribute with a format of "unorm8x2" and byte values [0x7F, 0xFF] can be accessed in the shader with the following types:
Shader type Shader value
f16 0.5h
f32 0.5f
vec2<f16> vec2(0.5h, 1.0h)
vec2<f32> vec2(0.5f, 1.0f)
vec3<f16> vec3(0.5h, 1.0h, 0.0h)
vec3<f32> vec3(0.5f, 1.0f, 0.0f)
vec4<f16> vec4(0.5h, 1.0h, 0.0h, 1.0h)
vec4<f32> vec4(0.5f, 1.0f, 0.0f, 1.0f)
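The unorm8 decoding behind the table above is byte / 255, so 0xFF maps to 1.0 and 0x7F maps to approximately 0.498 (shown rounded to 0.5 in the table). A one-line sketch:

```javascript
// unorm8 decode: normalize an unsigned byte to the [0, 1] range.
const unorm8 = (byte) => byte / 0xFF;
```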

See § 23.2.2 Vertex Processing for additional information about how vertex formats are exposed in the shader.

enum GPUVertexFormat {
    "uint8",
    "uint8x2",
    "uint8x4",
    "sint8",
    "sint8x2",
    "sint8x4",
    "unorm8",
    "unorm8x2",
    "unorm8x4",
    "snorm8",
    "snorm8x2",
    "snorm8x4",
    "uint16",
    "uint16x2",
    "uint16x4",
    "sint16",
    "sint16x2",
    "sint16x4",
    "unorm16",
    "unorm16x2",
    "unorm16x4",
    "snorm16",
    "snorm16x2",
    "snorm16x4",
    "float16",
    "float16x2",
    "float16x4",
    "float32",
    "float32x2",
    "float32x3",
    "float32x4",
    "uint32",
    "uint32x2",
    "uint32x3",
    "uint32x4",
    "sint32",
    "sint32x2",
    "sint32x3",
    "sint32x4",
    "unorm10-10-10-2",
    "unorm8x4-bgra",
};
Vertex format Data type Components byteSize Example WGSL type
"uint8" unsigned int 1 1 u32
"uint8x2" unsigned int 2 2 vec2<u32>
"uint8x4" unsigned int 4 4 vec4<u32>
"sint8" signed int 1 1 i32
"sint8x2" signed int 2 2 vec2<i32>
"sint8x4" signed int 4 4 vec4<i32>
"unorm8" unsigned normalized 1 1 f32
"unorm8x2" unsigned normalized 2 2 vec2<f32>
"unorm8x4" unsigned normalized 4 4 vec4<f32>
"snorm8" signed normalized 1 1 f32
"snorm8x2" signed normalized 2 2 vec2<f32>
"snorm8x4" signed normalized 4 4 vec4<f32>
"uint16" unsigned int 1 2 u32
"uint16x2" unsigned int 2 4 vec2<u32>
"uint16x4" unsigned int 4 8 vec4<u32>
"sint16" signed int 1 2 i32
"sint16x2" signed int 2 4 vec2<i32>
"sint16x4" signed int 4 8 vec4<i32>
"unorm16" unsigned normalized 1 2 f32
"unorm16x2" unsigned normalized 2 4 vec2<f32>
"unorm16x4" unsigned normalized 4 8 vec4<f32>
"snorm16" signed normalized 1 2 f32
"snorm16x2" signed normalized 2 4 vec2<f32>
"snorm16x4" signed normalized 4 8 vec4<f32>
"float16" float 1 2 f32
"float16x2" float 2 4 vec2<f16>
"float16x4" float 4 8 vec4<f16>
"float32" float 1 4 f32
"float32x2" float 2 8 vec2<f32>
"float32x3" float 3 12 vec3<f32>
"float32x4" float 4 16 vec4<f32>
"uint32" unsigned int 1 4 u32
"uint32x2" unsigned int 2 8 vec2<u32>
"uint32x3" unsigned int 3 12 vec3<u32>
"uint32x4" unsigned int 4 16 vec4<u32>
"sint32" signed int 1 4 i32
"sint32x2" signed int 2 8 vec2<i32>
"sint32x3" signed int 3 12 vec3<i32>
"sint32x4" signed int 4 16 vec4<i32>
"unorm10-10-10-2" unsigned normalized 4 4 vec4<f32>
"unorm8x4-bgra" unsigned normalized 4 4 vec4<f32>
enum GPUVertexStepMode {
    "vertex",
    "instance",
};

The step mode configures how an address for vertex buffer data is computed, based on the current vertex or instance index:

"vertex"

The address is advanced by arrayStride for each vertex, and reset between instances.

"instance"

The address is advanced by arrayStride for each instance.

dictionary GPUVertexState
         : GPUProgrammableStage {
    sequence<GPUVertexBufferLayout?> buffers = [];
};
buffers, of type sequence<GPUVertexBufferLayout?>, defaulting to []

A list of GPUVertexBufferLayouts, each defining the layout of vertex attribute data in a vertex buffer used by this pipeline.

A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes, which describe the members of the structure.

Each GPUVertexAttribute describes its format and its offset, in bytes, within the structure.

Each attribute appears as a separate input in a vertex shader, each bound by a numeric location, which is specified by shaderLocation. Every location must be unique within the GPUVertexState.
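For instance, an interleaved layout packing a float32x3 position followed by a float32x2 texture coordinate into each element (12 + 8 = 20 bytes) could be declared like this (attribute names in comments are hypothetical):

```javascript
// Hypothetical interleaved vertex buffer layout: 20 bytes per element.
const vertexBufferLayout = {
  arrayStride: 20,
  stepMode: 'vertex',
  attributes: [
    { format: 'float32x3', offset: 0,  shaderLocation: 0 }, // position
    { format: 'float32x2', offset: 12, shaderLocation: 1 }, // uv
  ],
};
```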

dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUVertexStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};
arrayStride, of type GPUSize64

The stride, in bytes, between elements of this array.

stepMode, of type GPUVertexStepMode, defaulting to "vertex"

Whether each element of this array represents per-vertex data or per-instance data.

attributes, of type sequence<GPUVertexAttribute>

An array defining the layout of the vertex attributes within each element.

dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};
format, of type GPUVertexFormat

The GPUVertexFormat of the attribute.

offset, of type GPUSize64

The offset, in bytes, from the beginning of the element to the data for the attribute.

shaderLocation, of type GPUIndex32

The numeric location associated with this attribute, which will correspond with a "@location" attribute declared in the vertex.module.

validating GPUVertexBufferLayout(device, descriptor)

Arguments:

Device timeline steps:

  1. Return true, if and only if, all of the following conditions are satisfied:

validating GPUVertexState(device, descriptor, layout)

Arguments:

Device timeline steps:

  1. Let entryPoint be get the entry point(VERTEX, descriptor).

  2. Assert entryPoint is not null.

  3. All of the requirements in the following steps must be met.

    1. validating GPUProgrammableStage(VERTEX, descriptor, layout, device) must succeed.

    2. descriptor.buffers.size must be ≤ device.[[device]].[[limits]].maxVertexBuffers.

    3. Each vertexBuffer layout descriptor in the list descriptor.buffers must pass validating GPUVertexBufferLayout(device, vertexBuffer).

    4. The sum of vertexBuffer.attributes.size, over every vertexBuffer in descriptor.buffers, must be ≤ device.[[device]].[[limits]].maxVertexAttributes.

    5. For every vertex attribute declaration (at location location with type T) that is statically used by entryPoint, there must be exactly one pair (i, j) for which descriptor.buffers[i]?.attributes[j].shaderLocation == location.

      Let attrib be that GPUVertexAttribute.

    6. T must be compatible with attrib.format’s vertex data type:

      "unorm", "snorm", or "float"

      T must be f32 or vecN<f32>.

      "uint"

      T must be u32 or vecN<u32>.

      "sint"

      T must be i32 or vecN<i32>.
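The format-to-type compatibility rule above can be summarized by mapping a format's prefix to the required WGSL scalar base type. This helper is illustrative only, not part of the API:

```javascript
// Maps a GPUVertexFormat name to the WGSL scalar base type it must bind to,
// per the compatibility rules above (illustrative sketch).
function wgslScalarFor(format) {
  if (format.startsWith('uint')) return 'u32';
  if (format.startsWith('sint')) return 'i32';
  return 'f32'; // "unorm", "snorm", and "float" formats bind to f32 / vecN<f32>
}
```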

11. Copies

11.1. Buffer Copies

Buffer copy operations operate on raw bytes.

WebGPU provides "buffered" GPUCommandEncoder commands:

and "immediate" GPUQueue operations:

11.2. Texel Copies

Texel copy operations operate on texture/"image" data, rather than bytes.

WebGPU provides "buffered" GPUCommandEncoder commands:

and "immediate" GPUQueue operations:

During a texel copy texels are copied over with an equivalent texel representation. Texel copies only guarantee that valid, normal numeric values in the source have the same numeric value in the destination, and may not preserve the bit-representations of the following values:

Note: Copies may be performed with WGSL shaders, which means that any of the documented WGSL floating point behaviors may be observed.

The following definitions are used by these methods:

11.2.1. GPUTexelCopyBufferLayout

"GPUTexelCopyBufferLayout" describes the "layout" of texels in a "buffer" of bytes (GPUBuffer or AllowSharedBufferSource) in a "texel copy" operation.

dictionary GPUTexelCopyBufferLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};

A texel image is composed of one or more rows of texel blocks, referred to here as texel block rows. Each texel block row of a texel image must contain the same number of texel blocks, and all texel blocks in a texel image are of the same GPUTextureFormat.

A GPUTexelCopyBufferLayout is a layout of texel images within some linear memory. It’s used when copying data between a texture and a GPUBuffer, or when scheduling a write into a texture from the GPUQueue.

Operations that copy between byte arrays and textures always operate on whole texel blocks. It’s not possible to update only a part of a texel block.

Texel blocks are tightly packed within each texel block row in the linear memory layout of a texel copy, with each subsequent texel block immediately following the previous texel block, with no padding. This includes copies to/from specific aspects of depth-or-stencil format textures: stencil values are tightly packed in an array of bytes; depth values are tightly packed in an array of the appropriate type ("depth16unorm" or "depth32float").

offset, of type GPUSize64, defaulting to 0

The offset, in bytes, from the beginning of the texel data source (such as a GPUTexelCopyBufferInfo.buffer) to the start of the texel data within that source.

bytesPerRow, of type GPUSize32

The stride, in bytes, between the beginning of each texel block row and the subsequent texel block row.

Required if there are multiple texel block rows (i.e. the copy height or depth is more than one block).

rowsPerImage, of type GPUSize32

Number of texel block rows per single texel image of the texture. rowsPerImage × bytesPerRow is the stride, in bytes, between the beginning of each texel image of data and the subsequent texel image.

Required if there are multiple texel images (i.e. the copy depth is more than one).
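Together, offset, bytesPerRow, and rowsPerImage imply a minimum data size for a copy of a given extent (measured in texel blocks). A sketch of that arithmetic, assuming bytesPerBlock is the texel block copy footprint of the format:

```javascript
// Minimum byte size implied by a GPUTexelCopyBufferLayout for a copy of
// widthBlocks × heightBlocks × depth texel blocks (illustrative sketch).
function requiredBytes(layout, bytesPerBlock, widthBlocks, heightBlocks, depth) {
  const { offset = 0, bytesPerRow, rowsPerImage } = layout;
  if (widthBlocks === 0 || heightBlocks === 0 || depth === 0) return offset;
  return offset
    + bytesPerRow * (rowsPerImage * (depth - 1) + (heightBlocks - 1)) // full strides
    + widthBlocks * bytesPerBlock;                                   // last row is tight
}
```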

11.2.2. GPUTexelCopyBufferInfo

"GPUTexelCopyBufferInfo" describes the "info" (GPUBuffer and GPUTexelCopyBufferLayout) about a "buffer" source or destination of a "texel copy" operation. Together with the copySize, it describes the footprint of a region of texels in a GPUBuffer.

dictionary GPUTexelCopyBufferInfo
         : GPUTexelCopyBufferLayout {
    required GPUBuffer buffer;
};
buffer, of type GPUBuffer

A buffer which either contains texel data to be copied or will store the texel data being copied, depending on the method it is being passed to.

validating GPUTexelCopyBufferInfo

Arguments:

Returns: boolean

Device timeline steps:

  1. Return true if and only if all of the following conditions are satisfied:

11.2.3. GPUTexelCopyTextureInfo

"GPUTexelCopyTextureInfo" describes the "info" (GPUTexture, etc.) about a "texture" source or destination of a "texel copy" operation. Together with the copySize, it describes a sub-region of a texture (spanning one or more contiguous texture subresources at the same mip-map level).

dictionary GPUTexelCopyTextureInfo {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};
texture, of type GPUTexture

Texture to copy to/from.

mipLevel, of type GPUIntegerCoordinate, defaulting to 0

Mip-map level of the texture to copy to/from.

origin, of type GPUOrigin3D, defaulting to {}

Defines the origin of the copy - the minimum corner of the texture sub-region to copy to/from. Together with copySize, defines the full copy sub-region.

aspect, of type GPUTextureAspect, defaulting to "all"

Defines which aspects of the texture to copy to/from.

The texture copy sub-region for depth slice or array layer index of GPUTexelCopyTextureInfo copyTexture is determined by running the following steps:
  1. Let texture be copyTexture.texture.

  2. If texture.dimension is:

    1d

    1. Assert index is 0.

    2. Let depthSliceOrLayer be texture.

    2d

    1. Let depthSliceOrLayer be array layer index of texture.

    3d

    1. Let depthSliceOrLayer be depth slice index of texture.

  3. Let textureMip be mip level copyTexture.mipLevel of depthSliceOrLayer.

  4. Return aspect copyTexture.aspect of textureMip.

The texel block byte offset of data described by GPUTexelCopyBufferLayout bufferLayout corresponding to texel block x, y of depth slice or array layer z of a GPUTexture texture is determined by running the following steps:
  1. Let blockBytes be the texel block copy footprint of texture.format.

  2. Let imageOffset be (z × bufferLayout.rowsPerImage × bufferLayout.bytesPerRow) + bufferLayout.offset.

  3. Let rowOffset be (y × bufferLayout.bytesPerRow) + imageOffset.

  4. Let blockOffset be (x × blockBytes) + rowOffset.

  5. Return blockOffset.
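The texel block byte offset steps above transcribe directly into code. This sketch assumes blockBytes is the texel block copy footprint of the texture's format:

```javascript
// Byte offset of texel block (x, y) in depth slice or array layer z,
// following the texel block byte offset steps above.
function texelBlockByteOffset(bufferLayout, blockBytes, x, y, z) {
  const imageOffset = z * bufferLayout.rowsPerImage * bufferLayout.bytesPerRow
    + bufferLayout.offset;
  const rowOffset = y * bufferLayout.bytesPerRow + imageOffset;
  return x * blockBytes + rowOffset;
}
```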

validating GPUTexelCopyTextureInfo(texelCopyTextureInfo, copySize)

Arguments:

Returns: boolean

Device timeline steps:

  1. Let blockWidth be the texel block width of texelCopyTextureInfo.texture.format.

  2. Let blockHeight be the texel block height of texelCopyTextureInfo.texture.format.

  3. Return true if and only if all of the following conditions apply:

validating texture buffer copy(texelCopyTextureInfo, bufferLayout, dataLength, copySize, textureUsage, aligned)

Arguments:

Returns: boolean

Device timeline steps:

  1. Let texture be texelCopyTextureInfo.texture

  2. Let aspectSpecificFormat be texture.format.

  3. Let offsetAlignment be the texel block copy footprint of texture.format.

  4. Return true if and only if all of the following conditions apply:

    1. validating GPUTexelCopyTextureInfo(texelCopyTextureInfo, copySize) returns true.

    2. texture.sampleCount is 1.

    3. texture.usage contains textureUsage.

    4. If texture.format is a depth-or-stencil format format:

      1. texelCopyTextureInfo.aspect must refer to a single aspect of texture.format.

      2. If textureUsage is:

        COPY_SRC

        That aspect must be a valid texel copy source according to § 26.1.2 Depth-stencil formats.

        COPY_DST

        That aspect must be a valid texel copy destination according to § 26.1.2 Depth-stencil formats.

      3. Set aspectSpecificFormat to the aspect-specific format according to § 26.1.2 Depth-stencil formats.

      4. Set offsetAlignment to 4.

    5. If aligned is true:

      1. bufferLayout.offset is a multiple of offsetAlignment.

    6. validating linear texture data(bufferLayout, dataLength, aspectSpecificFormat, copySize) succeeds.

11.2.4. GPUCopyExternalImageDestInfo

WebGPU textures hold raw numeric data, and are not tagged with semantic metadata describing colors. However, copyExternalImageToTexture() copies from sources that describe colors.

"GPUCopyExternalImageDestInfo" describes the "info" about the "destination" of a "copyExternalImageToTexture()" operation. It is a GPUTexelCopyTextureInfo which is additionally tagged with color space/encoding and alpha-premultiplication metadata, so that semantic color data may be preserved during copies. This metadata affects only the semantics of the copy operation operation, not the state or semantics of the destination texture object.

dictionary GPUCopyExternalImageDestInfo
         : GPUTexelCopyTextureInfo {
    PredefinedColorSpace colorSpace = "srgb";
    boolean premultipliedAlpha = false;
};
colorSpace, of type PredefinedColorSpace, defaulting to "srgb"

Describes the color space and encoding used to encode data into the destination texture.

This may result in values outside of the range [0, 1] being written to the target texture, if its format can represent them. Otherwise, the results are clamped to the target texture format’s range.

Note: If colorSpace matches the source image, conversion may not be necessary. See § 3.10.2 Color Space Conversion Elision.

premultipliedAlpha, of type boolean, defaulting to false

Describes whether the data written into the texture should have its RGB channels premultiplied by the alpha channel, or not.

If this option is set to true and the source is also premultiplied, the source RGB values must be preserved even if they exceed their corresponding alpha values.

Note: If premultipliedAlpha matches the source image, conversion may not be necessary. See § 3.10.2 Color Space Conversion Elision.
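As a non-normative illustration of what alpha-premultiplication means numerically, a plain-JavaScript sketch (hypothetical helper name, channels as normalized floats in [0, 1]) might look like:

```javascript
// Hypothetical helper: premultiply a texel's RGB channels by its alpha.
// Channel values are normalized floats in [0, 1].
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}

premultiply([1.0, 0.5, 0.0, 0.5]); // → [0.5, 0.25, 0, 0.5]
```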

11.2.5. GPUCopyExternalImageSourceInfo

GPUCopyExternalImageSourceInfo describes the "info" about the "source" of a copyExternalImageToTexture() operation.

typedef (ImageBitmap or
         ImageData or
         HTMLImageElement or
         HTMLVideoElement or
         VideoFrame or
         HTMLCanvasElement or
         OffscreenCanvas) GPUCopyExternalImageSource;

dictionary GPUCopyExternalImageSourceInfo {
    required GPUCopyExternalImageSource source;
    GPUOrigin2D origin = {};
    boolean flipY = false;
};

GPUCopyExternalImageSourceInfo has the following members:

source, of type GPUCopyExternalImageSource

The source of the texel copy. The copy source data is captured at the moment that copyExternalImageToTexture() is issued. Source size is determined as described by the external source dimensions table.

origin, of type GPUOrigin2D, defaulting to {}

Defines the origin of the copy: the minimum (top-left) corner of the source sub-region to copy from. Together with copySize, defines the full copy sub-region.

flipY, of type boolean, defaulting to false

Describes whether the source image is vertically flipped, or not.

If this option is set to true, the copy is flipped vertically: the bottom row of the source region is copied into the first row of the destination region, and so on. The origin option is still relative to the top-left corner of the source image, increasing downward.
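The row mapping implied by flipY can be sketched as follows (non-normative, plain-array illustration rather than the actual copy algorithm):

```javascript
// Hypothetical sketch: order in which source rows are written to the
// destination. With flipY = true, the bottom source row lands first.
function copyRows(sourceRows, flipY) {
  return flipY ? [...sourceRows].reverse() : [...sourceRows];
}

copyRows(["top", "middle", "bottom"], true);  // → ["bottom", "middle", "top"]
copyRows(["top", "middle", "bottom"], false); // → ["top", "middle", "bottom"]
```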

When external sources are used when creating or copying to textures, the external source dimensions are defined by the source type, given by this table:

External Source type Dimensions
ImageBitmap ImageBitmap.width, ImageBitmap.height
HTMLImageElement HTMLImageElement.naturalWidth, HTMLImageElement.naturalHeight
HTMLVideoElement intrinsic width of the frame, intrinsic height of the frame
VideoFrame VideoFrame.displayWidth, VideoFrame.displayHeight
ImageData ImageData.width, ImageData.height
HTMLCanvasElement or OffscreenCanvas with CanvasRenderingContext2D or GPUCanvasContext HTMLCanvasElement.width, HTMLCanvasElement.height
HTMLCanvasElement or OffscreenCanvas with WebGLRenderingContextBase WebGLRenderingContextBase.drawingBufferWidth, WebGLRenderingContextBase.drawingBufferHeight
HTMLCanvasElement or OffscreenCanvas with ImageBitmapRenderingContext ImageBitmapRenderingContext’s internal output bitmap ImageBitmap.width, ImageBitmap.height

11.2.6. Subroutines

GPUTexelCopyTextureInfo physical subresource size

Arguments:

Returns: GPUExtent3D

The GPUTexelCopyTextureInfo physical subresource size of texelCopyTextureInfo is calculated as follows:

Its width, height, and depthOrArrayLayers are the width, height, and depth, respectively, of the physical miplevel-specific texture extent of texelCopyTextureInfo.texture subresource at mipmap level texelCopyTextureInfo.mipLevel.

validating linear texture data(layout, byteSize, format, copyExtent)

Arguments:

GPUTexelCopyBufferLayout layout

Layout of the linear texture data.

GPUSize64 byteSize

Total size of the linear data, in bytes.

GPUTextureFormat format

Format of the texture.

GPUExtent3D copyExtent

Extent of the texture to copy.

Device timeline steps:

  1. Let:

  2. Fail if the following input validation requirements are not met:

  3. Let:

    Note: These default values have no effect, as they’re always multiplied by 0.

  4. Let requiredBytesInCopy be 0.

  5. If copyExtent.depthOrArrayLayers > 0:

    1. Increment requiredBytesInCopy by bytesPerRow × rowsPerImage × (copyExtent.depthOrArrayLayers − 1).

    2. If heightInBlocks > 0:

      1. Increment requiredBytesInCopy by bytesPerRow × (heightInBlocks − 1) + bytesInLastRow.

  6. Fail if the following condition is not satisfied:

    • The layout fits inside the linear data: layout.offset + requiredBytesInCopy ≤ byteSize.
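The requiredBytesInCopy computation above can be sketched in JavaScript as follows (non-normative; the inputs are assumed to have already been resolved from the layout and format in the elided steps above):

```javascript
// Sketch of the requiredBytesInCopy computation from
// "validating linear texture data" (inputs assumed pre-resolved).
function requiredBytesInCopy({ bytesPerRow, rowsPerImage, heightInBlocks,
                               bytesInLastRow, depthOrArrayLayers }) {
  let required = 0;
  if (depthOrArrayLayers > 0) {
    // All full images except the last one.
    required += bytesPerRow * rowsPerImage * (depthOrArrayLayers - 1);
    if (heightInBlocks > 0) {
      // Last image: all full rows plus the (possibly shorter) last row.
      required += bytesPerRow * (heightInBlocks - 1) + bytesInLastRow;
    }
  }
  return required;
}

// Example: 256-byte rows, 4 rows per image, 2 layers, 64-byte last row:
requiredBytesInCopy({
  bytesPerRow: 256, rowsPerImage: 4,
  heightInBlocks: 4, bytesInLastRow: 64, depthOrArrayLayers: 2,
}); // → 256×4×1 + 256×3 + 64 = 1856
```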

validating texture copy range

Arguments:

GPUTexelCopyTextureInfo texelCopyTextureInfo

The texture subresource being copied into and copy origin.

GPUExtent3D copySize

The size of the region being copied.

Device timeline steps:

  1. Let blockWidth be the texel block width of texelCopyTextureInfo.texture.format.

  2. Let blockHeight be the texel block height of texelCopyTextureInfo.texture.format.

  3. Let subresourceSize be the GPUTexelCopyTextureInfo physical subresource size of texelCopyTextureInfo.

  4. Return whether all the conditions below are satisfied:

    Note: The texture copy range is validated against the physical (rounded-up) size for compressed formats, allowing copies to access texture blocks which are not fully inside the texture.
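A non-normative sketch of the rounding-up described in the note, assuming a block-compressed format with 4×4 texel blocks:

```javascript
// Sketch: the physical size of a mip level rounds each dimension up to a
// whole texel block (4×4 assumed here, as for BC/ETC2-style formats).
function physicalMipSize(width, height, blockWidth = 4, blockHeight = 4) {
  return {
    width: Math.ceil(width / blockWidth) * blockWidth,
    height: Math.ceil(height / blockHeight) * blockHeight,
  };
}

physicalMipSize(10, 7); // → { width: 12, height: 8 }
```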

Two GPUTextureFormats format1 and format2 are copy-compatible if:

    • format1 equals format2, or

    • format1 and format2 differ only in whether they are srgb formats (have the -srgb suffix).

The set of subresources for texture copy(texelCopyTextureInfo, copySize) is the subset of subresources of texture = texelCopyTextureInfo.texture for which each subresource s satisfies the following:

12. Command Buffers

Command buffers are pre-recorded lists of GPU commands (blocks of queue timeline steps) that can be submitted to a GPUQueue for execution. Each GPU command represents a task to be performed on the queue timeline, such as setting state, drawing, copying resources, etc.

A GPUCommandBuffer can only be submitted once, at which point it becomes invalidated. To reuse rendering commands across multiple submissions, use GPURenderBundle.

12.1. GPUCommandBuffer

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

GPUCommandBuffer has the following device timeline properties:

[[command_list]], of type list<GPU command>, readonly

A list of GPU commands to be executed on the Queue timeline when this command buffer is submitted.

[[renderState]], of type RenderState, initially null

The current state used by any render pass commands being executed.

12.1.1. Command Buffer Creation

dictionary GPUCommandBufferDescriptor
         : GPUObjectDescriptorBase {
};

13. Command Encoding

13.1. GPUCommandsMixin

GPUCommandsMixin defines state common to all interfaces which encode commands. It has no methods.

interface mixin GPUCommandsMixin {
};

GPUCommandsMixin has the following device timeline properties:

[[state]], of type encoder state, initially "open"

The current state of the encoder.

[[commands]], of type list<GPU command>, initially []

A list of GPU commands to be executed on the Queue timeline when a GPUCommandBuffer containing these commands is submitted.

The encoder state may be one of the following:

"open"

The encoder is available to encode new commands.

"locked"

The encoder cannot be used, because it is locked by a child encoder: it is a GPUCommandEncoder, and a GPURenderPassEncoder or GPUComputePassEncoder is active. The encoder becomes "open" again when the pass is ended.

Any command issued in this state invalidates the encoder.

"ended"

The encoder has been ended and new commands can no longer be encoded.

Any command issued in this state will generate a validation error.

To Validate the encoder state of GPUCommandsMixin encoder run the
following device timeline steps:
  1. If encoder.[[state]] is:

    "open"

    Return true.

    "locked"

    Invalidate encoder and return false.

    "ended"

    Generate a validation error, and return false.
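The state check above can be sketched as follows (non-normative; the state, valid, and errors fields are hypothetical stand-ins for internal slots and error reporting):

```javascript
// Sketch of "Validate the encoder state": returns true only when "open";
// "locked" invalidates the encoder, "ended" generates a validation error.
function validateEncoderState(encoder) {
  switch (encoder.state) {
    case "open":
      return true;
    case "locked":
      encoder.valid = false; // invalidate the encoder
      return false;
    case "ended":
      encoder.errors.push("validation error: encoder already ended");
      return false;
  }
}

const lockedEncoder = { state: "locked", valid: true, errors: [] };
validateEncoderState(lockedEncoder); // → false; lockedEncoder.valid is now false
```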

To Enqueue a command on GPUCommandsMixin encoder which issues the steps of a GPU Command command, run the following device timeline steps:
  1. Append command to encoder.[[commands]].

  2. When command is executed as part of a GPUCommandBuffer:

    1. Issue the steps of command.

13.2. GPUCommandEncoder

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUBuffer destination,
        optional GPUSize64 size);
    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        optional GPUSize64 size);

    undefined copyBufferToTexture(
        GPUTexelCopyBufferInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToBuffer(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyBufferInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToTexture(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);

    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;

13.2.1. Command Encoder Creation

dictionary GPUCommandEncoderDescriptor
         : GPUObjectDescriptorBase {
};
createCommandEncoder(descriptor)

Creates a GPUCommandEncoder.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createCommandEncoder(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUCommandEncoderDescriptor Description of the GPUCommandEncoder to create.

Returns: GPUCommandEncoder

Content timeline steps:

  1. Let e be ! create a new WebGPU object(this, GPUCommandEncoder, descriptor).

  2. Issue the initialization steps on the Device timeline of this.

  3. Return e.

Device timeline initialization steps:
  1. If any of the following conditions are unsatisfied generate a validation error, invalidate e and return.

    • this must not be lost.

Creating a GPUCommandEncoder, encoding a command to clear a buffer, finishing the encoder to get a GPUCommandBuffer, then submitting it to the GPUQueue.
const commandEncoder = gpuDevice.createCommandEncoder();
commandEncoder.clearBuffer(buffer);
const commandBuffer = commandEncoder.finish();
gpuDevice.queue.submit([commandBuffer]);

13.3. Pass Encoding

beginRenderPass(descriptor)

Begins encoding a render pass described by descriptor.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.beginRenderPass(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderPassDescriptor Description of the GPURenderPassEncoder to create.

Returns: GPURenderPassEncoder

Content timeline steps:

  1. For each non-null colorAttachment in descriptor.colorAttachments:

    1. If colorAttachment.clearValue is provided:

      1. ? validate GPUColor shape(colorAttachment.clearValue).

  2. Let pass be a new GPURenderPassEncoder object.

  3. Issue the initialization steps on the Device timeline of this.

  4. Return pass.

Device timeline initialization steps:
  1. Validate the encoder state of this. If it returns false, invalidate pass and return.

  2. Set this.[[state]] to "locked".

  3. Let attachmentRegions be a list of [texture subresource, depthSlice?] pairs, initially empty. Each pair describes the region of the texture to be rendered to, which includes a single depth slice for "3d" textures only.

  4. For each non-null colorAttachment in descriptor.colorAttachments:

    1. Add [colorAttachment.view, colorAttachment.depthSlice ?? null] to attachmentRegions.

    2. If colorAttachment.resolveTarget is not null:

      1. Add [colorAttachment.resolveTarget, undefined] to attachmentRegions.

  5. If any of the following requirements are unmet, invalidate pass and return.

    • descriptor must meet the Valid Usage rules given device this.[[device]].

    • The set of texture regions in attachmentRegions must be pairwise disjoint. That is, no two texture regions may overlap.

  6. Add each texture subresource in attachmentRegions to pass.[[usage scope]] with usage attachment.

  7. Let depthStencilAttachment be descriptor.depthStencilAttachment.

  8. If depthStencilAttachment is not null:

    1. Let depthStencilView be depthStencilAttachment.view.

    2. Add the depth subresource of depthStencilView, if any, to pass.[[usage scope]] with usage attachment-read if depthStencilAttachment.depthReadOnly is true, or attachment otherwise.

    3. Add the stencil subresource of depthStencilView, if any, to pass.[[usage scope]] with usage attachment-read if depthStencilAttachment.stencilReadOnly is true, or attachment otherwise.

    4. Set pass.[[depthReadOnly]] to depthStencilAttachment.depthReadOnly.

    5. Set pass.[[stencilReadOnly]] to depthStencilAttachment.stencilReadOnly.

  9. Set pass.[[layout]] to derive render targets layout from pass(descriptor).

  10. If descriptor.timestampWrites is provided:

    1. Let timestampWrites be descriptor.timestampWrites.

    2. If timestampWrites.beginningOfPassWriteIndex is provided, append a GPU command to this.[[commands]] with the following steps:

      1. Before the pass commands begin executing, write the current queue timestamp into index timestampWrites.beginningOfPassWriteIndex of timestampWrites.querySet.

    3. If timestampWrites.endOfPassWriteIndex is provided, set pass.[[endTimestampWrite]] to a GPU command with the following steps:

      1. After the pass commands finish executing, write the current queue timestamp into index timestampWrites.endOfPassWriteIndex of timestampWrites.querySet.

  11. Set pass.[[drawCount]] to 0.

  12. Set pass.[[maxDrawCount]] to descriptor.maxDrawCount.

  13. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let the [[renderState]] of the currently executing GPUCommandBuffer be a new RenderState.

  2. Set [[renderState]].[[colorAttachments]] to descriptor.colorAttachments.

  3. Set [[renderState]].[[depthStencilAttachment]] to descriptor.depthStencilAttachment.

  4. For each non-null colorAttachment in descriptor.colorAttachments:

    1. Let colorView be colorAttachment.view.

    2. If colorView.[[descriptor]].dimension is:

      "3d"

      Let colorSubregion be colorAttachment.depthSlice of colorView.

      Otherwise

      Let colorSubregion be colorView.

    3. If colorAttachment.loadOp is:

      "load"

      Ensure the contents of colorSubregion are loaded into the framebuffer memory associated with colorSubregion.

      "clear"

      Set every texel of the framebuffer memory associated with colorSubregion to colorAttachment.clearValue.

  5. If depthStencilAttachment is not null:

    1. If depthStencilAttachment.depthLoadOp is:

      Not provided

      Assert that depthStencilAttachment.depthReadOnly is true and ensure the contents of the depth subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "load"

      Ensure the contents of the depth subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "clear"

      Set every texel of the framebuffer memory associated with the depth subresource of depthStencilView to depthStencilAttachment.depthClearValue.

    2. If depthStencilAttachment.stencilLoadOp is:

      Not provided

      Assert that depthStencilAttachment.stencilReadOnly is true and ensure the contents of the stencil subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "load"

      Ensure the contents of the stencil subresource of depthStencilView are loaded into the framebuffer memory associated with depthStencilView.

      "clear"

      Set every texel of the framebuffer memory associated with the stencil subresource of depthStencilView to depthStencilAttachment.stencilClearValue.

Note: Read-only depth-stencil attachments are implicitly treated as though the "load" operation was used. Validation that requires the load op to not be provided for read-only attachments is done in GPURenderPassDepthStencilAttachment Valid Usage.
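For example, a render pass that clears its color attachment but preserves existing depth contents might be described as follows (a non-normative sketch; the view objects are placeholders for real GPUTextureViews created elsewhere):

```javascript
// Placeholders standing in for GPUTextureViews created elsewhere.
const colorTextureView = {};
const depthTextureView = {};

// Descriptor for beginRenderPass(): the color attachment is cleared to
// opaque black; depth contents are loaded (preserved) and stored back.
const renderPassDescriptor = {
  colorAttachments: [{
    view: colorTextureView,
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: "clear",
    storeOp: "store",
  }],
  depthStencilAttachment: {
    view: depthTextureView,
    depthLoadOp: "load",
    depthStoreOp: "store",
  },
};
```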

beginComputePass(descriptor)

Begins encoding a compute pass described by descriptor.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.beginComputePass(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUComputePassDescriptor

Returns: GPUComputePassEncoder

Content timeline steps:

  1. Let pass be a new GPUComputePassEncoder object.

  2. Issue the initialization steps on the Device timeline of this.

  3. Return pass.

Device timeline initialization steps:
  1. Validate the encoder state of this. If it returns false, invalidate pass and return.

  2. Set this.[[state]] to "locked".

  3. If any of the following requirements are unmet, invalidate pass and return.

  4. If descriptor.timestampWrites is provided:

    1. Let timestampWrites be descriptor.timestampWrites.

    2. If timestampWrites.beginningOfPassWriteIndex is provided, append a GPU command to this.[[commands]] with the following steps:

      1. Before the pass commands begin executing, write the current queue timestamp into index timestampWrites.beginningOfPassWriteIndex of timestampWrites.querySet.

    3. If timestampWrites.endOfPassWriteIndex is provided, set pass.[[endTimestampWrite]] to a GPU command with the following steps:

      1. After the pass commands finish executing, write the current queue timestamp into index timestampWrites.endOfPassWriteIndex of timestampWrites.querySet.

13.4. Buffer Copy Commands

copyBufferToBuffer() has two overloads:

copyBufferToBuffer(source, destination, size)

Shorthand, equivalent to copyBufferToBuffer(source, 0, destination, 0, size).

copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of a GPUBuffer to a sub-region of another GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size) method.
Parameter Type Nullable Optional Description
source GPUBuffer The GPUBuffer to copy from.
sourceOffset GPUSize64 Offset in bytes into source to begin copying from.
destination GPUBuffer The GPUBuffer to copy to.
destinationOffset GPUSize64 Offset in bytes into destination to place the copied data.
size GPUSize64 Bytes to copy.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If size is undefined, set it to source.sizesourceOffset.

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Copy size bytes of source, beginning at sourceOffset, into destination, beginning at destinationOffset.
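The relationship between the two overloads and the size default can be sketched as follows (non-normative; buffers are stand-in objects with a size in bytes):

```javascript
// Sketch: resolve copyBufferToBuffer() arguments to the five-argument form.
// A missing size defaults to source.size - sourceOffset.
function resolveCopyArgs(source, sourceOffset, destination,
                         destinationOffset, size) {
  if (size === undefined) size = source.size - sourceOffset;
  return { sourceOffset, destinationOffset, size };
}

const src = { size: 1024 };
const dst = { size: 2048 };
// The shorthand copyBufferToBuffer(src, dst, size) uses offsets of 0:
resolveCopyArgs(src, 0, dst, 0, undefined);   // → size 1024
resolveCopyArgs(src, 256, dst, 0, undefined); // → size 768
```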

clearBuffer(buffer, offset, size)

Encode a command into the GPUCommandEncoder that fills a sub-region of a GPUBuffer with zeros.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.clearBuffer(buffer, offset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer The GPUBuffer to clear.
offset GPUSize64 Offset in bytes into buffer where the sub-region to clear begins.
size GPUSize64 Size in bytes of the sub-region to clear. Defaults to the size of the buffer minus offset.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If size is missing, set size to max(0, buffer.size - offset).

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Set size bytes of buffer to 0 starting at offset.

13.5. Texel Copy Commands

copyBufferToTexture(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of a GPUBuffer to a sub-region of one or multiple contiguous texture subresources.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyBufferToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUTexelCopyBufferInfo Combined with copySize, defines the region of the source buffer.
destination GPUTexelCopyTextureInfo Combined with copySize, defines the region of the destination texture subresource.
copySize GPUExtent3D

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(destination.origin).

  2. ? validate GPUExtent3D shape(copySize).

  3. Issue the subsequent steps on the Device timeline of this.[[device]]:

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let aligned be true.

  3. Let dataLength be source.buffer.size.

  4. If any of the following conditions are unsatisfied, invalidate this and return.

  5. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let blockWidth be the texel block width of destination.texture.

  2. Let blockHeight be the texel block height of destination.texture.

  3. Let dstOrigin be destination.origin.

  4. Let dstBlockOriginX be (dstOrigin.x ÷ blockWidth).

  5. Let dstBlockOriginY be (dstOrigin.y ÷ blockHeight).

  6. Let blockColumns be (copySize.width ÷ blockWidth).

  7. Let blockRows be (copySize.height ÷ blockHeight).

  8. Assert that dstBlockOriginX, dstBlockOriginY, blockColumns, and blockRows are integers.

  9. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let dstSubregion be texture copy sub-region (z + dstOrigin.z) of destination.

    2. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Let blockOffset be the texel block byte offset of source for (x, y, z) of destination.texture.

        2. Set texel block (dstBlockOriginX + x, dstBlockOriginY + y) of dstSubregion to be an equivalent texel representation to the texel block described by source.buffer at offset blockOffset.
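For an uncompressed format, the texel block byte offset used above can be sketched as follows (non-normative; assumes layout values already resolved and a fixed byte size per texel block):

```javascript
// Sketch: byte offset of texel block (x, y, z) in a linear buffer layout,
// for a format with the given bytes per texel block.
function texelBlockByteOffset({ offset, bytesPerRow, rowsPerImage },
                              x, y, z, bytesPerBlock) {
  return offset
       + z * rowsPerImage * bytesPerRow  // skip whole images
       + y * bytesPerRow                 // skip whole rows
       + x * bytesPerBlock;              // skip blocks within the row
}

// rgba8unorm (4 bytes/texel), 256-byte rows, 4 rows per image:
texelBlockByteOffset({ offset: 0, bytesPerRow: 256, rowsPerImage: 4 },
                     2, 1, 1, 4); // → 1024 + 256 + 8 = 1288
```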

copyTextureToBuffer(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous texture subresources to a sub-region of a GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyTextureToBuffer(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUTexelCopyTextureInfo Combined with copySize, defines the region of the source texture subresources.
destination GPUTexelCopyBufferInfo Combined with copySize, defines the region of the destination buffer.
copySize GPUExtent3D

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(source.origin).

  2. ? validate GPUExtent3D shape(copySize).

  3. Issue the subsequent steps on the Device timeline of this.[[device]]:

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let aligned be true.

  3. Let dataLength be destination.buffer.size.

  4. If any of the following conditions are unsatisfied, invalidate this and return.

  5. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let blockWidth be the texel block width of source.texture.

  2. Let blockHeight be the texel block height of source.texture.

  3. Let srcOrigin be source.origin.

  4. Let srcBlockOriginX be (srcOrigin.x ÷ blockWidth).

  5. Let srcBlockOriginY be (srcOrigin.y ÷ blockHeight).

  6. Let blockColumns be (copySize.width ÷ blockWidth).

  7. Let blockRows be (copySize.height ÷ blockHeight).

  8. Assert that srcBlockOriginX, srcBlockOriginY, blockColumns, and blockRows are integers.

  9. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let srcSubregion be texture copy sub-region (z + srcOrigin.z) of source.

    2. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Let blockOffset be the texel block byte offset of destination for (x, y, z) of source.texture.

        2. Set destination.buffer at offset blockOffset to be an equivalent texel representation to texel block (srcBlockOriginX + x, srcBlockOriginY + y) of srcSubregion.

copyTextureToTexture(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous texture subresources to another sub-region of one or multiple contiguous texture subresources.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyTextureToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUTexelCopyTextureInfo Combined with copySize, defines the region of the source texture subresources.
destination GPUTexelCopyTextureInfo Combined with copySize, defines the region of the destination texture subresources.
copySize GPUExtent3D

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(source.origin).

  2. ? validate GPUOrigin3D shape(destination.origin).

  3. ? validate GPUExtent3D shape(copySize).

  4. Issue the subsequent steps on the Device timeline of this.[[device]]:

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let blockWidth be the texel block width of source.texture.

  2. Let blockHeight be the texel block height of source.texture.

  3. Let srcOrigin be source.origin.

  4. Let srcBlockOriginX be (srcOrigin.x ÷ blockWidth).

  5. Let srcBlockOriginY be (srcOrigin.y ÷ blockHeight).

  6. Let dstOrigin be destination.origin.

  7. Let dstBlockOriginX be (dstOrigin.x ÷ blockWidth).

  8. Let dstBlockOriginY be (dstOrigin.y ÷ blockHeight).

  9. Let blockColumns be (copySize.width ÷ blockWidth).

  10. Let blockRows be (copySize.height ÷ blockHeight).

  11. Assert that srcBlockOriginX, srcBlockOriginY, dstBlockOriginX, dstBlockOriginY, blockColumns, and blockRows are integers.

  12. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let srcSubregion be texture copy sub-region (z + srcOrigin.z) of source.

    2. Let dstSubregion be texture copy sub-region (z + dstOrigin.z) of destination.

    3. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Set texel block (dstBlockOriginX + x, dstBlockOriginY + y) of dstSubregion to be an equivalent texel representation to texel block (srcBlockOriginX + x, srcBlockOriginY + y) of srcSubregion.

13.6. Queries

resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset)

Resolves query results from a GPUQuerySet out into a range of a GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset) method.
Parameter Type Nullable Optional Description
querySet GPUQuerySet
firstQuery GPUSize32
queryCount GPUSize32
destination GPUBuffer
destinationOffset GPUSize64

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

    • querySet is valid to use with this.

    • destination is valid to use with this.

    • destination.usage contains QUERY_RESOLVE.

    • firstQuery < the number of queries in querySet.

    • (firstQuery + queryCount) ≤ the number of queries in querySet.

    • destinationOffset is a multiple of 256.

    • destinationOffset + 8 × queryCount ≤ destination.size.

  3. Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.

Queue timeline steps:
  1. Let queryIndex be firstQuery.

  2. Let offset be destinationOffset.

  3. While queryIndex < firstQuery + queryCount:

    1. Set 8 bytes of destination, beginning at offset, to be the value of querySet at queryIndex.

    2. Set queryIndex to be queryIndex + 1.

    3. Set offset to be offset + 8.
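The destination layout produced by these steps (8 bytes per query, written contiguously from destinationOffset) can be sketched as:

```javascript
// Sketch: compute where each resolved query's 8-byte value lands in the
// destination buffer, given already-validated arguments.
function resolvedQueryByteRanges(firstQuery, queryCount, destinationOffset) {
  const ranges = [];
  for (let i = 0; i < queryCount; i++) {
    ranges.push({
      query: firstQuery + i,
      begin: destinationOffset + 8 * i,       // inclusive
      end: destinationOffset + 8 * (i + 1),   // exclusive
    });
  }
  return ranges;
}

resolvedQueryByteRanges(2, 3, 256);
// → queries 2..4 at byte ranges [256,264), [264,272), [272,280)
```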

13.7. Finalization

A GPUCommandBuffer containing the commands recorded by the GPUCommandEncoder can be created by calling finish(). Once finish() has been called, the command encoder can no longer be used.

finish(descriptor)

Completes recording of the command sequence and returns a corresponding GPUCommandBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.finish(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUCommandBufferDescriptor

Returns: GPUCommandBuffer

Content timeline steps:

  1. Let commandBuffer be a new GPUCommandBuffer.

  2. Issue the finish steps on the Device timeline of this.[[device]].

  3. Return commandBuffer.

Device timeline finish steps:
  1. Let validationSucceeded be true if all of the following requirements are met, and false otherwise.

  2. Set this.[[state]] to "ended".

  3. If validationSucceeded is false, then:

    1. Generate a validation error.

    2. Return an invalidated GPUCommandBuffer.

  4. Set commandBuffer.[[command_list]] to this.[[commands]].

14. Programmable Passes

interface mixin GPUBindingCommandsMixin {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        [AllowShared] Uint32Array dynamicOffsetsData,
        GPUSize64 dynamicOffsetsDataStart,
        GPUSize32 dynamicOffsetsDataLength);
};

GPUBindingCommandsMixin assumes the presence of GPUObjectBase and GPUCommandsMixin members on the same object. It must only be included by interfaces which also include those mixins.

GPUBindingCommandsMixin has the following device timeline properties:

[[bind_groups]], of type ordered map<GPUIndex32, GPUBindGroup>, initially empty

The current GPUBindGroup for each index.

[[dynamic_offsets]], of type ordered map<GPUIndex32, list<GPUBufferDynamicOffset>>, initially empty

The current dynamic offsets for each [[bind_groups]] entry.

14.1. Bind Groups

setBindGroup() has two overloads:

setBindGroup(index, bindGroup, dynamicOffsets)

Sets the current GPUBindGroup for the given index.

Called on: GPUBindingCommandsMixin this.

Arguments:

index, of type GPUIndex32, non-nullable, required

The index to set the bind group at.

bindGroup, of type GPUBindGroup, nullable, required

Bind group to use for subsequent render or compute commands.

dynamicOffsets, of type sequence<GPUBufferDynamicOffset>, non-nullable, defaulting to []

Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer.hasDynamicOffset, ordered by GPUBindGroupLayoutEntry.binding. See note for additional details.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let dynamicOffsetCount be 0 if bindGroup is null, or bindGroup.[[layout]].[[dynamicOffsetCount]] if not.

  3. If any of the following requirements are unmet, invalidate this and return.

  4. If bindGroup is null:

    1. Remove this.[[bind_groups]][index].

    2. Remove this.[[dynamic_offsets]][index].

    Otherwise:

    1. If any of the following requirements are unmet, invalidate this and return.

    2. Set this.[[bind_groups]][index] to be bindGroup.

    3. Set this.[[dynamic_offsets]][index] to be a copy of dynamicOffsets.

    4. If this is a GPURenderCommandsMixin:

      1. For each bindGroup in this.[[bind_groups]], merge bindGroup.[[usedResources]] into this.[[usage scope]]

setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength)

Sets the current GPUBindGroup for the given index, specifying dynamic offsets as a subset of a Uint32Array.

Called on: GPUBindingCommandsMixin this.

Arguments:

Arguments for the GPUBindingCommandsMixin.setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength) method.
Parameter Type Nullable Optional Description
index GPUIndex32 The index to set the bind group at.
bindGroup GPUBindGroup? Bind group to use for subsequent render or compute commands.
dynamicOffsetsData Uint32Array Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer.hasDynamicOffset, ordered by GPUBindGroupLayoutEntry.binding. See note for additional details.
dynamicOffsetsDataStart GPUSize64 Offset in elements into dynamicOffsetsData where the buffer offset data begins.
dynamicOffsetsDataLength GPUSize32 Number of buffer offsets to read from dynamicOffsetsData.

Returns: undefined

Content timeline steps:

  1. If any of the following requirements are unmet, throw a RangeError and return.

    • dynamicOffsetsDataStart must be ≥ 0.

    • dynamicOffsetsDataStart + dynamicOffsetsDataLength must be ≤ dynamicOffsetsData.length.

  2. Let dynamicOffsets be a list containing the range, starting at index dynamicOffsetsDataStart, of dynamicOffsetsDataLength elements of a copy of dynamicOffsetsData.

  3. Call this.setBindGroup(index, bindGroup, dynamicOffsets).

NOTE:
Dynamic offsets are applied in GPUBindGroupLayoutEntry.binding order.

This means that if dynamic bindings is the list of each GPUBindGroupLayoutEntry in the GPUBindGroupLayout with buffer?.hasDynamicOffset set to true, sorted by GPUBindGroupLayoutEntry.binding, then dynamic offset[i], as supplied to setBindGroup(), will correspond to dynamic bindings[i].

For a GPUBindGroupLayout created with the following call:
// Note the bindings are listed out-of-order in this array, but it
// doesn’t matter because they will be sorted by binding index.
let layout = gpuDevice.createBindGroupLayout({
    entries: [{
        binding: 1,
        buffer: {},
    }, {
        binding: 2,
        buffer: { hasDynamicOffset: true },
    }, {
        binding: 0,
        buffer: { hasDynamicOffset: true },
    }]
});

Used by a GPUBindGroup created with the following call:

// Like above, the array order doesn’t matter here.
// It doesn’t even need to match the order used in the layout.
let bindGroup = gpuDevice.createBindGroup({
    layout: layout,
    entries: [{
        binding: 1,
        resource: { buffer: bufferA, offset: 256 },
    }, {
        binding: 2,
        resource: { buffer: bufferB, offset: 512 },
    }, {
        binding: 0,
        resource: { buffer: bufferC },
    }]
});

And bound with the following call:

pass.setBindGroup(0, bindGroup, [1024, 2048]);

The following buffer offsets will be applied:

Binding  Buffer   Offset
0        bufferC  1024 (Dynamic)
1        bufferA  256  (Static)
2        bufferB  2560 (Static + Dynamic)
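For illustration, the same binding call can be expressed with the Uint32Array overload; this sketch reuses the hypothetical `pass` and `bindGroup` from the example above, selecting the two offsets as a subrange of a larger array:

```javascript
// Equivalent to pass.setBindGroup(0, bindGroup, [1024, 2048]) using the
// Uint32Array overload: two offsets are read starting at element 1.
const allOffsets = new Uint32Array([0, 1024, 2048, 0]);

function bindWithSubrange(pass, bindGroup) {
  // Reads allOffsets[1] and allOffsets[2] as the dynamic offsets.
  pass.setBindGroup(0, bindGroup, allOffsets, 1, 2);
}
```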
To iterate over each dynamic binding offset in a given GPUBindGroup bindGroup with a given list of steps to be executed for each dynamic offset, run the following device timeline steps:
  1. Let dynamicOffsetIndex be 0.

  2. Let layout be bindGroup.[[layout]].

  3. For each GPUBindGroupEntry entry in bindGroup.[[entries]] ordered in increasing values of entry.binding:

    1. Let bindingDescriptor be the GPUBindGroupLayoutEntry at layout.[[entryMap]][entry.binding].

    2. If bindingDescriptor.buffer?.hasDynamicOffset is true:

      1. Let bufferBinding be get as buffer binding(entry.resource).

      2. Let bufferLayout be bindingDescriptor.buffer.

      3. Call steps with bufferBinding, bufferLayout, and dynamicOffsetIndex.

      4. Set dynamicOffsetIndex to dynamicOffsetIndex + 1.

Validate encoder bind groups(encoder, pipeline)

Arguments:

GPUBindingCommandsMixin encoder

Encoder whose bind groups are being validated.

GPUPipelineBase pipeline

Pipeline to validate the encoder’s bind groups are compatible with.

Device timeline steps:

  1. If any of the following conditions are unsatisfied, return false:

Otherwise return true.

Encoder bind groups alias a writable resource(encoder, pipeline) if any writable buffer binding range overlaps with any other binding range of the same buffer, or any writable texture binding overlaps in texture subresources with any other texture binding (which may use the same or a different GPUTextureView object).

Note: This algorithm limits the use of the usage scope storage exception.

Arguments:

GPUBindingCommandsMixin encoder

Encoder whose bind groups are being validated.

GPUPipelineBase pipeline

Pipeline to validate the encoder’s bind groups are compatible with.

Device timeline steps:

  1. For each stage in [VERTEX, FRAGMENT, COMPUTE]:

    1. Let bufferBindings be a list of (GPUBufferBinding, boolean) pairs, where the latter indicates whether the resource was used as writable.

    2. Let textureViews be a list of (GPUTextureView, boolean) pairs, where the latter indicates whether the resource was used as writable.

    3. For each pair of (GPUIndex32 bindGroupIndex, GPUBindGroupLayout bindGroupLayout) in pipeline.[[layout]].[[bindGroupLayouts]]:

      1. Let bindGroup be encoder.[[bind_groups]][bindGroupIndex].

      2. Let bindGroupLayoutEntries be bindGroupLayout.[[descriptor]].entries.

      3. Let bufferRanges be the bound buffer ranges of bindGroup, given dynamic offsets encoder.[[dynamic_offsets]][bindGroupIndex]

      4. For each (GPUBindGroupLayoutEntry bindGroupLayoutEntry, GPUBufferBinding resource) in bufferRanges, in which bindGroupLayoutEntry.visibility contains stage:

        1. Let resourceWritable be (bindGroupLayoutEntry.buffer.type == "storage").

        2. For each pair (GPUBufferBinding pastResource, boolean pastResourceWritable) in bufferBindings:

          1. If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are buffer-binding-aliasing, return true.

        3. Append (resource, resourceWritable) to bufferBindings.

      5. For each GPUBindGroupLayoutEntry bindGroupLayoutEntry in bindGroupLayoutEntries, and corresponding GPUTextureView resource in bindGroup, in which bindGroupLayoutEntry.visibility contains stage:

        1. If bindGroupLayoutEntry.storageTexture is not provided, continue.

        2. Let resourceWritable be whether bindGroupLayoutEntry.storageTexture.access is a writable access mode.

        3. For each pair (GPUTextureView pastResource, boolean pastResourceWritable) in textureViews:

          1. If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are texture-view-aliasing, return true.

        4. Append (resource, resourceWritable) to textureViews.

  2. Return false.

Note: Implementations are strongly encouraged to optimize this algorithm.
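The aliasing rule above can be illustrated with a sketch of a configuration it is designed to catch: two bindings in one bind group reference overlapping ranges of the same buffer, one of them writable. The names `device` and `buffer` (STORAGE usage, at least 512 bytes) are assumed for illustration:

```javascript
// Sketch: a bind group whose entries alias a writable resource.
// Binding 0 writes bytes [0, 512) of `buffer`, which overlaps the
// range [256, 512) read through binding 1, so a dispatch or draw
// using this bind group would fail the aliasing validation above.
function makeAliasingBindGroup(device, buffer) {
  const layout = device.createBindGroupLayout({
    entries: [
      { binding: 0, visibility: GPUShaderStage.COMPUTE,
        buffer: { type: 'storage' } },            // writable
      { binding: 1, visibility: GPUShaderStage.COMPUTE,
        buffer: { type: 'read-only-storage' } },  // read-only
    ],
  });
  return device.createBindGroup({
    layout,
    entries: [
      { binding: 0, resource: { buffer, offset: 0,   size: 512 } },
      { binding: 1, resource: { buffer, offset: 256, size: 256 } }, // overlaps
    ],
  });
}
```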

15. Debug Markers

GPUDebugCommandsMixin provides methods to apply debug labels to groups of commands or insert a single label into the command sequence.

Debug groups can be nested to create a hierarchy of labeled commands, and must be well-balanced.

Like object labels, these labels have no required behavior, but may be shown in error messages and browser developer tools, and may be passed to native API backends.

interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};

GPUDebugCommandsMixin assumes the presence of GPUObjectBase and GPUCommandsMixin members on the same object. It must only be included by interfaces which also include those mixins.

GPUDebugCommandsMixin has the following device timeline properties:

[[debug_group_stack]], of type stack<USVString>

A stack of active debug group labels.

GPUDebugCommandsMixin has the following methods:

pushDebugGroup(groupLabel)

Begins a labeled debug group containing subsequent commands.

Called on: GPUDebugCommandsMixin this.

Arguments:

Arguments for the GPUDebugCommandsMixin.pushDebugGroup(groupLabel) method.
Parameter Type Nullable Optional Description
groupLabel USVString The label for the command group.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Push groupLabel onto this.[[debug_group_stack]].

popDebugGroup()

Ends the labeled debug group most recently started by pushDebugGroup().

Called on: GPUDebugCommandsMixin this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following requirements are unmet, invalidate this and return.

  3. Pop an entry off of this.[[debug_group_stack]].

insertDebugMarker(markerLabel)

Marks a point in a stream of commands with a label.

Called on: GPUDebugCommandsMixin this.

Arguments:

Arguments for the GPUDebugCommandsMixin.insertDebugMarker(markerLabel) method.
Parameter Type Nullable Optional Description
markerLabel USVString The label to insert.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.
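The debug commands above can be sketched together; the groups must be well-balanced, i.e. every pushDebugGroup() is matched by a popDebugGroup() before the encoder or pass ends. `pass` stands for any hypothetical encoder that includes GPUDebugCommandsMixin:

```javascript
// Sketch of nested, well-balanced debug groups with a marker,
// on any encoder including GPUDebugCommandsMixin.
function annotatePass(pass) {
  pass.pushDebugGroup('Frame');
  pass.pushDebugGroup('Opaque geometry');
  pass.insertDebugMarker('Draw terrain');
  // ... draw or dispatch calls ...
  pass.popDebugGroup(); // closes 'Opaque geometry'
  pass.popDebugGroup(); // closes 'Frame'
}
```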

16. Compute Passes

16.1. GPUComputePassEncoder

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatchWorkgroups(GPUSize32 workgroupCountX, optional GPUSize32 workgroupCountY = 1, optional GPUSize32 workgroupCountZ = 1);
    undefined dispatchWorkgroupsIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    undefined end();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUBindingCommandsMixin;

GPUComputePassEncoder has the following device timeline properties:

[[command_encoder]], of type GPUCommandEncoder, readonly

The GPUCommandEncoder that created this compute pass encoder.

[[endTimestampWrite]], of type GPU command?, readonly, defaulting to null

GPU command, if any, writing a timestamp when the pass ends.

[[pipeline]], of type GPUComputePipeline, initially null

The current GPUComputePipeline.

16.1.1. Compute Pass Encoder Creation

dictionary GPUComputePassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};
querySet, of type GPUQuerySet

The GPUQuerySet, of type "timestamp", that the query results will be written to.

beginningOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the beginning of the compute pass will be written.

endOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the end of the compute pass will be written.

Note: Timestamp query values are written in nanoseconds, but how the value is determined is implementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.

dictionary GPUComputePassDescriptor
         : GPUObjectDescriptorBase {
    GPUComputePassTimestampWrites timestampWrites;
};
timestampWrites, of type GPUComputePassTimestampWrites

Defines which timestamp values will be written for this pass, and where to write them to.
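As a sketch of using timestampWrites, assuming a GPUDevice `device` that was created with the "timestamp-query" feature enabled and an open GPUCommandEncoder `encoder`:

```javascript
// Sketch: beginning a compute pass that records timestamps at the
// start and end of the pass. Assumes `device` was requested with the
// "timestamp-query" feature; `encoder` is an open GPUCommandEncoder.
function beginTimedComputePass(device, encoder) {
  const querySet = device.createQuerySet({ type: 'timestamp', count: 2 });
  return encoder.beginComputePass({
    timestampWrites: {
      querySet,
      beginningOfPassWriteIndex: 0,
      endOfPassWriteIndex: 1,
    },
  });
}
```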

16.1.2. Dispatch

setPipeline(pipeline)

Sets the current GPUComputePipeline.

Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.setPipeline(pipeline) method.
Parameter Type Nullable Optional Description
pipeline GPUComputePipeline The compute pipeline to use for subsequent dispatch commands.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Set this.[[pipeline]] to be pipeline.

dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ)

Dispatch work to be performed with the current GPUComputePipeline. See § 23.1 Computing for the detailed specification.

Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ) method.
Parameter Type Nullable Optional Description
workgroupCountX GPUSize32 X dimension of the grid of workgroups to dispatch.
workgroupCountY GPUSize32 Y dimension of the grid of workgroups to dispatch.
workgroupCountZ GPUSize32 Z dimension of the grid of workgroups to dispatch.
NOTE:
The x, y, and z values passed to dispatchWorkgroups() and dispatchWorkgroupsIndirect() are the number of workgroups to dispatch for each dimension, not the number of shader invocations to perform across each dimension. This matches the behavior of modern native GPU APIs, but differs from the behavior of OpenCL.

This means that if a GPUShaderModule defines an entry point with @workgroup_size(4, 4), and work is dispatched to it with the call computePass.dispatchWorkgroups(8, 8), the entry point will be invoked 1024 times in total: a 4x4 workgroup is dispatched 8 times along both the X and Y axes (4*4*8*8 = 1024).

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let usageScope be an empty usage scope.

  3. For each bindGroup in this.[[bind_groups]], merge bindGroup.[[usedResources]] into usageScope.

  4. If any of the following conditions are unsatisfied, invalidate this and return.

  5. Let bindingState be a snapshot of this’s current state.

  6. Enqueue a command on this which issues the subsequent steps on the Queue timeline.

Queue timeline steps:
  1. Execute a grid of workgroups with dimensions [workgroupCountX, workgroupCountY, workgroupCountZ] with bindingState.[[pipeline]] using bindingState.[[bind_groups]].
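The dispatch sequence above can be sketched end-to-end; `device`, a compiled GPUComputePipeline `pipeline`, and a matching `bindGroup` are assumed to already exist:

```javascript
// Sketch of a complete compute pass, assuming `device`, a compiled
// `pipeline` (GPUComputePipeline), and a matching `bindGroup`.
// With @workgroup_size(4, 4) in the shader, dispatchWorkgroups(8, 8)
// launches 8*8 = 64 workgroups, i.e. 4*4*8*8 invocations in total.
const totalInvocations = 4 * 4 * 8 * 8; // 1024

function runComputePass(device, pipeline, bindGroup) {
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(8, 8);
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```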

dispatchWorkgroupsIndirect(indirectBuffer, indirectOffset)

Dispatch work to be performed with the current GPUComputePipeline using parameters read from a GPUBuffer. See § 23.1 Computing for the detailed specification.

The indirect dispatch parameters encoded in the buffer must be a tightly packed block of three 32-bit unsigned integer values (12 bytes total), given in the same order as the arguments for dispatchWorkgroups(). For example:

let dispatchIndirectParameters = new Uint32Array(3);
dispatchIndirectParameters[0] = workgroupCountX;
dispatchIndirectParameters[1] = workgroupCountY;
dispatchIndirectParameters[2] = workgroupCountZ;
Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.dispatchWorkgroupsIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect dispatch parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the dispatch data begins.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let usageScope be an empty usage scope.

  3. For each bindGroup in this.[[bind_groups]], merge bindGroup.[[usedResources]] into usageScope.

  4. Add indirectBuffer to usageScope with usage input.

  5. If any of the following conditions are unsatisfied, invalidate this and return.

  6. Let bindingState be a snapshot of this’s current state.

  7. Enqueue a command on this which issues the subsequent steps on the Queue timeline.

Queue timeline steps:
  1. Let workgroupCountX be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.

  2. Let workgroupCountY be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.

  3. Let workgroupCountZ be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.

  4. If workgroupCountX, workgroupCountY, or workgroupCountZ is greater than this.device.limits.maxComputeWorkgroupsPerDimension, return.

  5. Execute a grid of workgroups with dimensions [workgroupCountX, workgroupCountY, workgroupCountZ] with bindingState.[[pipeline]] using bindingState.[[bind_groups]].
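Putting the pieces together, the indirect parameters can be uploaded to a buffer with INDIRECT usage and consumed in a pass. This is a sketch assuming `device`, `pipeline`, and `bindGroup` already exist:

```javascript
// Sketch: uploading indirect dispatch parameters and consuming them.
// `device`, `pipeline`, and `bindGroup` are assumed to exist.
function dispatchIndirect(device, pipeline, bindGroup) {
  // Three tightly packed u32 values: X, Y, Z workgroup counts (12 bytes).
  const params = new Uint32Array([8, 8, 1]);
  const indirectBuffer = device.createBuffer({
    size: params.byteLength,
    usage: GPUBufferUsage.INDIRECT | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(indirectBuffer, 0, params);

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroupsIndirect(indirectBuffer, 0);
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```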

16.1.3. Finalization

The compute pass encoder can be ended by calling end() once the user has finished recording commands for the pass. Once end() has been called the compute pass encoder can no longer be used.

end()

Completes recording of the compute pass command sequence.

Called on: GPUComputePassEncoder this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Let parentEncoder be this.[[command_encoder]].

  2. If any of the following requirements are unmet, generate a validation error and return.

  3. Set this.[[state]] to "ended".

  4. Set parentEncoder.[[state]] to "open".

  5. If any of the following requirements are unmet, invalidate parentEncoder and return.

  6. Extend parentEncoder.[[commands]] with this.[[commands]].

  7. If this.[[endTimestampWrite]] is not null:

    1. Extend parentEncoder.[[commands]] with this.[[endTimestampWrite]].

17. Render Passes

17.1. GPURenderPassEncoder

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth);

    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);

    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();

    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined end();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUBindingCommandsMixin;
GPURenderPassEncoder includes GPURenderCommandsMixin;

GPURenderPassEncoder has the following device timeline properties:

[[command_encoder]], of type GPUCommandEncoder, readonly

The GPUCommandEncoder that created this render pass encoder.

[[attachment_size]], readonly

Set to the following extents:

  • width, height = the dimensions of the pass’s render attachments

[[occlusion_query_set]], of type GPUQuerySet, readonly

The GPUQuerySet to store occlusion query results for the pass, which is initialized with GPURenderPassDescriptor.occlusionQuerySet at pass creation time.

[[endTimestampWrite]], of type GPU command?, readonly, defaulting to null

GPU command, if any, writing a timestamp when the pass ends.

[[maxDrawCount]] of type GPUSize64, readonly

The maximum number of draws allowed in this pass.

[[occlusion_query_active]], of type boolean

Whether the pass’s [[occlusion_query_set]] is being written.

When executing encoded render pass commands as part of a GPUCommandBuffer, an internal RenderState object is used to track the current state required for rendering.

RenderState has the following queue timeline properties:

[[occlusionQueryIndex]], of type GPUSize32

The index into [[occlusion_query_set]] at which to store the occlusion query results.

[[viewport]]

Current viewport rectangle and depth range. Initially set to the following values:

  • x, y = 0.0, 0.0

  • width, height = the dimensions of the pass’s render targets

  • minDepth, maxDepth = 0.0, 1.0

[[scissorRect]]

Current scissor rectangle. Initially set to the following values:

  • x, y = 0, 0

  • width, height = the dimensions of the pass’s render targets

[[blendConstant]], of type GPUColor

Current blend constant value, initially [0, 0, 0, 0].

[[stencilReference]], of type GPUStencilValue

Current stencil reference value, initially 0.

[[colorAttachments]], of type sequence<GPURenderPassColorAttachment?>

The color attachments and state for this render pass.

[[depthStencilAttachment]], of type GPURenderPassDepthStencilAttachment?

The depth/stencil attachment and state for this render pass.

Render passes also have framebuffer memory, which contains the texel data associated with each attachment that is written into by draw commands and read from for blending and depth/stencil testing.

Note: Depending on the GPU hardware, framebuffer memory may be the memory allocated by the attachment textures or may be a separate area of memory that the texture data is copied to and from, such as with tile-based architectures.
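The RenderState defaults above cover the full attachment size; a pass encoder can narrow them with the dynamic-state commands. A sketch, with `pass` standing for a hypothetical GPURenderPassEncoder whose attachments are 640x360:

```javascript
// Sketch: overriding the default RenderState values on a render
// pass encoder whose attachments are assumed to be 640x360.
function configureRenderState(pass) {
  pass.setViewport(0, 0, 640, 360, 0.0, 1.0); // x, y, w, h, minDepth, maxDepth
  pass.setScissorRect(16, 16, 608, 328);      // discard fragments outside
  pass.setBlendConstant({ r: 0.5, g: 0.5, b: 0.5, a: 1.0 });
  pass.setStencilReference(1);
}
```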

17.1.1. Render Pass Encoder Creation

dictionary GPURenderPassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};
querySet, of type GPUQuerySet

The GPUQuerySet, of type "timestamp", that the query results will be written to.

beginningOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the beginning of the render pass will be written.

endOfPassWriteIndex, of type GPUSize32

If defined, indicates the query index in querySet into which the timestamp at the end of the render pass will be written.

Note: Timestamp query values are written in nanoseconds, but how the value is determined is implementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.

dictionary GPURenderPassDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment?> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
    GPURenderPassTimestampWrites timestampWrites;
    GPUSize64 maxDrawCount = 50000000;
};
colorAttachments, of type sequence<GPURenderPassColorAttachment?>

The set of GPURenderPassColorAttachment values in this sequence defines which color attachments will be output to when executing this render pass.

Due to usage compatibility, no color attachment may alias another attachment or any resource used inside the render pass.

depthStencilAttachment, of type GPURenderPassDepthStencilAttachment

The GPURenderPassDepthStencilAttachment value that defines the depth/stencil attachment that will be output to and tested against when executing this render pass.

Due to usage compatibility, no writable depth/stencil attachment may alias another attachment or any resource used inside the render pass.

occlusionQuerySet, of type GPUQuerySet

The GPUQuerySet value defines where the occlusion query results will be stored for this pass.

timestampWrites, of type GPURenderPassTimestampWrites

Defines which timestamp values will be written for this pass, and where to write them to.

maxDrawCount, of type GPUSize64, defaulting to 50000000

The maximum number of draw calls that will be done in the render pass. Used by some implementations to size work injected before the render pass. Keeping the default value is a good choice unless it is known that more draw calls will be done.
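A typical GPURenderPassDescriptor combining the members above can be sketched as follows; `encoder`, `colorTexture`, and `depthTexture` (RENDER_ATTACHMENT usage, matching sizes and sample counts) are assumed to exist:

```javascript
// Sketch: a render pass with one color attachment and a depth attachment.
// `encoder`, `colorTexture`, and `depthTexture` are assumed to exist.
function beginPass(encoder, colorTexture, depthTexture) {
  return encoder.beginRenderPass({
    colorAttachments: [{
      view: colorTexture.createView(),
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      loadOp: 'clear',   // clearing is often cheaper than 'load'
      storeOp: 'store',
    }],
    depthStencilAttachment: {
      view: depthTexture.createView(),
      depthClearValue: 1.0,
      depthLoadOp: 'clear',
      depthStoreOp: 'discard', // depth not needed after the pass
    },
  });
}
```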

Valid Usage

Given a GPUDevice device and GPURenderPassDescriptor this, the following validation rules apply:

  1. this.colorAttachments.size must be ≤ device.[[limits]].maxColorAttachments.

  2. For each non-null colorAttachment in this.colorAttachments:

    1. colorAttachment.view must be valid to use with device.

    2. If colorAttachment.resolveTarget is provided:

      1. colorAttachment.resolveTarget must be valid to use with device.

    3. colorAttachment must meet the GPURenderPassColorAttachment Valid Usage rules.

  3. If this.depthStencilAttachment is provided:

    1. this.depthStencilAttachment.view must be valid to use with device.

    2. this.depthStencilAttachment must meet the GPURenderPassDepthStencilAttachment Valid Usage rules.

  4. There must exist at least one attachment, either:

  5. Validating GPURenderPassDescriptor’s color attachment bytes per sample(device, this.colorAttachments) succeeds.

  6. All views in non-null members of this.colorAttachments, and this.depthStencilAttachment.view if present, must have equal sampleCounts.

  7. For each view in non-null members of this.colorAttachments and this.depthStencilAttachment.view, if present, the [[renderExtent]] must match.

  8. If this.occlusionQuerySet is provided:

    1. this.occlusionQuerySet must be valid to use with device.

    2. this.occlusionQuerySet.type must be occlusion.

  9. If this.timestampWrites is provided:

Validating GPURenderPassDescriptor’s color attachment bytes per sample(device, colorAttachments)

Arguments:

Device timeline steps:

  1. Let formats be an empty list<GPUTextureFormat?>

  2. For each colorAttachment in colorAttachments:

    1. If colorAttachment is undefined, continue.

    2. Append colorAttachment.view.[[descriptor]].format to formats.

  3. Calculating color attachment bytes per sample(formats) must be ≤ device.[[limits]].maxColorAttachmentBytesPerSample.

17.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachment {
    required (GPUTexture or GPUTextureView) view;
    GPUIntegerCoordinate depthSlice;
    (GPUTexture or GPUTextureView) resolveTarget;

    GPUColor clearValue;
    required GPULoadOp loadOp;
    required GPUStoreOp storeOp;
};
view, of type (GPUTexture or GPUTextureView)

Describes the texture subresource that will be output to for this color attachment. The subresource is determined by calling get as texture view(view).

depthSlice, of type GPUIntegerCoordinate

Indicates the depth slice index of the "3d" view that will be output to for this color attachment.

resolveTarget, of type (GPUTexture or GPUTextureView)

Describes the texture subresource that will receive the resolved output for this color attachment if view is multisampled. The subresource is determined by calling get as texture view(resolveTarget).

clearValue, of type GPUColor

Indicates the value to clear view to prior to executing the render pass. If not provided, defaults to {r: 0, g: 0, b: 0, a: 0}. Ignored if loadOp is not "clear".

The components of clearValue are all double values. They are converted to a texel value of texture format matching the render attachment. If conversion fails, a validation error is generated.

loadOp, of type GPULoadOp

Indicates the load operation to perform on view prior to executing the render pass.

Note: It is recommended to prefer clearing; see "clear" for details.

storeOp, of type GPUStoreOp

The store operation to perform on view after executing the render pass.

GPURenderPassColorAttachment Valid Usage

Given a GPURenderPassColorAttachment this:

  1. Let renderViewDescriptor be this.view.[[descriptor]].

  2. Let renderTexture be this.view.[[texture]].

  3. All of the requirements in the following steps must be met.

    1. renderViewDescriptor.format must be a color renderable format.

    2. this.view must be a renderable texture view.

    3. If renderViewDescriptor.dimension is "3d":

      1. this.depthSlice must be provided and must be < the depthOrArrayLayers of the logical miplevel-specific texture extent of the renderTexture subresource at mipmap level renderViewDescriptor.baseMipLevel.

      Otherwise:

      1. this.depthSlice must not be provided.

    4. If this.loadOp is "clear":

      1. Converting the IDL value this.clearValue to a texel value of texture format renderViewDescriptor.format must not throw a TypeError.

        Note: An error is not thrown if the value is out-of-range for the format but in-range for the corresponding WGSL primitive type (f32, i32, or u32).

    5. If this.resolveTarget is provided:

      1. Let resolveViewDescriptor be this.resolveTarget.[[descriptor]].

      2. Let resolveTexture be this.resolveTarget.[[texture]].

      3. renderTexture.sampleCount must be > 1.

      4. resolveTexture.sampleCount must be 1.

      5. this.resolveTarget must be a non-3d renderable texture view.

      6. this.resolveTarget.[[renderExtent]] and this.view.[[renderExtent]] must match.

      7. resolveViewDescriptor.format must equal renderViewDescriptor.format.

      8. resolveTexture.format must equal renderTexture.format.

      9. resolveViewDescriptor.format must support resolve according to § 26.1.1 Plain color formats.

A GPUTextureView view is a renderable texture view if all of the requirements in the following device timeline steps are met:
  1. Let descriptor be view.[[descriptor]].

  2. descriptor.usage must contain RENDER_ATTACHMENT.

  3. descriptor.dimension must be "2d" or "2d-array" or "3d".

  4. descriptor.mipLevelCount must be 1.

  5. descriptor.arrayLayerCount must be 1.

  6. descriptor.aspect must refer to all aspects of view.[[texture]].

Calculating color attachment bytes per sample(formats)

Arguments:

Returns: GPUSize32

  1. Let total be 0.

  2. For each non-null format in formats

    1. Assert: format is a color renderable format.

    2. Let renderTargetPixelByteCost be the render target pixel byte cost of format.

    3. Let renderTargetComponentAlignment be the render target component alignment of format.

    4. Round total up to the smallest multiple of renderTargetComponentAlignment greater than or equal to total.

    5. Add renderTargetPixelByteCost to total.

  3. Return total.
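The algorithm above can be sketched in JavaScript. The per-format byte costs and alignments below are illustrative values for a few formats, not the normative tables (see § 26.1 for those):

```javascript
// Sketch of the "color attachment bytes per sample" calculation.
// The per-format costs here are illustrative; the normative render
// target pixel byte cost and component alignment live in § 26.1.
const renderTargetCost = {
  'r8unorm':     { byteCost: 1,  alignment: 1 },
  'rgba8unorm':  { byteCost: 8,  alignment: 1 },
  'rgba16float': { byteCost: 8,  alignment: 2 },
  'rgba32float': { byteCost: 16, alignment: 4 },
};

function colorAttachmentBytesPerSample(formats) {
  let total = 0;
  for (const format of formats) {
    if (format === null) continue;
    const { byteCost, alignment } = renderTargetCost[format];
    // Round total up to the format's component alignment...
    total = Math.ceil(total / alignment) * alignment;
    // ...then add the format's pixel byte cost.
    total += byteCost;
  }
  return total;
}
```

The result would then be compared against maxColorAttachmentBytesPerSample.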

17.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachment {
    required (GPUTexture or GPUTextureView) view;

    float depthClearValue;
    GPULoadOp depthLoadOp;
    GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    GPUStencilValue stencilClearValue = 0;
    GPULoadOp stencilLoadOp;
    GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};
view, of type (GPUTexture or GPUTextureView)

Describes the texture subresource that will be output to and read from for this depth/stencil attachment. The subresource is determined by calling get as texture view(view).

depthClearValue, of type float

Indicates the value to clear view’s depth component to prior to executing the render pass. Ignored if depthLoadOp is not "clear". Must be between 0.0 and 1.0, inclusive.

depthLoadOp, of type GPULoadOp

Indicates the load operation to perform on view’s depth component prior to executing the render pass.

Note: It is recommended to prefer clearing; see "clear" for details.

depthStoreOp, of type GPUStoreOp

The store operation to perform on view’s depth component after executing the render pass.

depthReadOnly, of type boolean, defaulting to false

Indicates that the depth component of view is read only.

stencilClearValue, of type GPUStencilValue, defaulting to 0

Indicates the value to clear view’s stencil component to prior to executing the render pass. Ignored if stencilLoadOp is not "clear".

The value will be converted to the type of the stencil aspect of view by taking the same number of LSBs as the number of bits in the stencil aspect of one texel of view.

stencilLoadOp, of type GPULoadOp

Indicates the load operation to perform on view’s stencil component prior to executing the render pass.

Note: It is recommended to prefer clearing; see "clear" for details.

stencilStoreOp, of type GPUStoreOp

The store operation to perform on view’s stencil component after executing the render pass.

stencilReadOnly, of type boolean, defaulting to false

Indicates that the stencil component of view is read only.

GPURenderPassDepthStencilAttachment Valid Usage

Given a GPURenderPassDepthStencilAttachment this, the following validation rules apply:

17.1.1.3. Load & Store Operations
enum GPULoadOp {
    "load",
    "clear",
};
"load"

Loads the existing value for this attachment into the render pass.

"clear"

Loads a clear value for this attachment into the render pass.

Note: On some GPU hardware (primarily mobile), "clear" is significantly cheaper because it avoids loading data from main memory into tile-local memory. On other GPU hardware, there isn’t a significant difference. As a result, it is recommended to use "clear" rather than "load" in cases where the initial value doesn’t matter (e.g. the render target will be cleared using a skybox).
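Note: (Non-normative.) For example, a pass that overdraws the entire color target (such as a skybox fill) can still use "clear"; the clear value is never seen, but the load from main memory is avoided. The colorView argument below is a hypothetical, pre-created GPUTextureView.

```javascript
// A render pass descriptor sketch preferring "clear" over "load"
// when the prior contents of the attachment don't matter.
function skyboxPassDescriptor(colorView) {
  return {
    colorAttachments: [{
      view: colorView,
      loadOp: "clear",   // cheaper than "load" on tile-based GPUs
      clearValue: { r: 0, g: 0, b: 0, a: 1 }, // never visible; skybox covers it
      storeOp: "store",
    }],
  };
}
```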

enum GPUStoreOp {
    "store",
    "discard",
};
"store"

Stores the resulting value of the render pass for this attachment.

"discard"

Discards the resulting value of the render pass for this attachment.

Note: Discarded attachments behave as if they are cleared to zero, but implementations are not required to perform a clear at the end of the render pass. Implementations which do not explicitly clear discarded attachments at the end of a pass must lazily clear them prior to reading the attachment contents, which occurs via sampling, copies, attaching to a later render pass with "load", displaying or reading back the canvas (get a copy of the image contents of a context), etc.

17.1.1.4. Render Pass Layout

GPURenderPassLayout declares the layout of the render targets of a GPURenderBundle. It is also used internally to describe GPURenderPassEncoder layouts and GPURenderPipeline layouts. It determines compatibility between render passes, render bundles, and render pipelines.

dictionary GPURenderPassLayout
         : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat?> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};
colorFormats, of type sequence<GPUTextureFormat?>

A list of the GPUTextureFormats of the color attachments for this pass or bundle.

depthStencilFormat, of type GPUTextureFormat

The GPUTextureFormat of the depth/stencil attachment for this pass or bundle.

sampleCount, of type GPUSize32, defaulting to 1

Number of samples per pixel in the attachments for this pass or bundle.

Two GPURenderPassLayout values are equal if:
derive render targets layout from pass

Arguments:

Returns: GPURenderPassLayout

Device timeline steps:

  1. Let layout be a new GPURenderPassLayout object.

  2. For each colorAttachment in descriptor.colorAttachments:

    1. If colorAttachment is not null:

      1. Set layout.sampleCount to colorAttachment.view.[[texture]].sampleCount.

      2. Append colorAttachment.view.[[descriptor]].format to layout.colorFormats.

    2. Otherwise:

      1. Append null to layout.colorFormats.

  3. Let depthStencilAttachment be descriptor.depthStencilAttachment.

  4. If depthStencilAttachment is not null:

    1. Let view be depthStencilAttachment.view.

    2. Set layout.sampleCount to view.[[texture]].sampleCount.

    3. Set layout.depthStencilFormat to view.[[descriptor]].format.

  5. Return layout.
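Note: (Non-normative.) The steps above can be sketched over plain objects, with the internal slots [[texture]] and [[descriptor]] modeled as ordinary properties.

```javascript
// Sketch of "derive render targets layout from pass". `descriptor`
// models a GPURenderPassDescriptor; internal slots are plain fields.
function deriveLayoutFromPass(descriptor) {
  const layout = { colorFormats: [], depthStencilFormat: undefined, sampleCount: 1 };
  for (const colorAttachment of descriptor.colorAttachments) {
    if (colorAttachment !== null) {
      layout.sampleCount = colorAttachment.view.texture.sampleCount;
      layout.colorFormats.push(colorAttachment.view.descriptor.format);
    } else {
      layout.colorFormats.push(null); // sparse slots stay null
    }
  }
  const ds = descriptor.depthStencilAttachment;
  if (ds != null) {
    layout.sampleCount = ds.view.texture.sampleCount;
    layout.depthStencilFormat = ds.view.descriptor.format;
  }
  return layout;
}
```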

derive render targets layout from pipeline

Arguments:

Returns: GPURenderPassLayout

Device timeline steps:

  1. Let layout be a new GPURenderPassLayout object.

  2. Set layout.sampleCount to descriptor.multisample.count.

  3. If descriptor.depthStencil is provided:

    1. Set layout.depthStencilFormat to descriptor.depthStencil.format.

  4. If descriptor.fragment is provided:

    1. For each colorTarget in descriptor.fragment.targets:

      1. Append colorTarget.format to layout.colorFormats if colorTarget is not null, or append null otherwise.

  5. Return layout.
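Note: (Non-normative.) The pipeline variant can be sketched the same way, over a plain GPURenderPipelineDescriptor-shaped object.

```javascript
// Sketch of "derive render targets layout from pipeline". Dictionary
// defaults (e.g. multisample.count = 1) are modeled with ?? here.
function deriveLayoutFromPipeline(descriptor) {
  const layout = { colorFormats: [], depthStencilFormat: undefined, sampleCount: 1 };
  layout.sampleCount = descriptor.multisample?.count ?? 1;
  if (descriptor.depthStencil) {
    layout.depthStencilFormat = descriptor.depthStencil.format;
  }
  if (descriptor.fragment) {
    for (const target of descriptor.fragment.targets) {
      layout.colorFormats.push(target ? target.format : null);
    }
  }
  return layout;
}
```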

17.1.2. Finalization

The render pass encoder can be ended by calling end() once the user has finished recording commands for the pass. Once end() has been called the render pass encoder can no longer be used.

end()

Completes recording of the render pass commands sequence.

Called on: GPURenderPassEncoder this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Let parentEncoder be this.[[command_encoder]].

  2. If any of the following requirements are unmet, generate a validation error and return.

  3. Set this.[[state]] to "ended".

  4. Set parentEncoder.[[state]] to "open".

  5. If any of the following requirements are unmet, invalidate parentEncoder and return.

  6. Extend parentEncoder.[[commands]] with this.[[commands]].

  7. If this.[[endTimestampWrite]] is not null:

    1. Extend parentEncoder.[[commands]] with this.[[endTimestampWrite]].

  8. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. For each non-null colorAttachment in renderState.[[colorAttachments]]:

    1. Let colorView be colorAttachment.view.

    2. If colorView.[[descriptor]].dimension is:

      "3d"

      Let colorSubregion be depth slice colorAttachment.depthSlice of colorView.

      Otherwise

      Let colorSubregion be colorView.

    3. If colorAttachment.resolveTarget is not null:

      1. Resolve the multiple samples of every texel of colorSubregion to a single sample and copy to colorAttachment.resolveTarget.

    4. If colorAttachment.storeOp is:

      "store"

      Ensure the contents of the framebuffer memory associated with colorSubregion are stored in colorSubregion.

      "discard"

      Set every texel of colorSubregion to zero.

  2. Let depthStencilAttachment be renderState.[[depthStencilAttachment]].

  3. If depthStencilAttachment is not null:

    1. If depthStencilAttachment.depthStoreOp is:

      Not provided

      Assert that depthStencilAttachment.depthReadOnly is true and leave the depth subresource of depthStencilView unchanged.

      "store"

      Ensure the contents of the framebuffer memory associated with the depth subresource of depthStencilView are stored in depthStencilView.

      "discard"

      Set every texel in the depth subresource of depthStencilView to zero.

    2. If depthStencilAttachment.stencilStoreOp is:

      Not provided

      Assert that depthStencilAttachment.stencilReadOnly is true and leave the stencil subresource of depthStencilView unchanged.

      "store"

      Ensure the contents of the framebuffer memory associated with the stencil subresource of depthStencilView are stored in depthStencilView.

      "discard"

      Set every texel in the stencil subresource of depthStencilView to zero.

  4. Let renderState be null.

Note: Discarded attachments behave as if they are cleared to zero, but implementations are not required to perform a clear at the end of the render pass. See the note on "discard" for additional details.

Note: Read-only depth-stencil attachments can be thought of as implicitly using the "store" operation, but since their content is unchanged during the render pass implementations don’t need to update the attachment. Validation that requires the store op to not be provided for read-only attachments is done in GPURenderPassDepthStencilAttachment Valid Usage.

17.2. GPURenderCommandsMixin

GPURenderCommandsMixin defines rendering commands common to GPURenderPassEncoder and GPURenderBundleEncoder.

interface mixin GPURenderCommandsMixin {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat, optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer? buffer, optional GPUSize64 offset = 0, optional GPUSize64 size);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstIndex = 0,
        optional GPUSignedOffset32 baseVertex = 0,
        optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

GPURenderCommandsMixin assumes the presence of GPUObjectBase, GPUCommandsMixin, and GPUBindingCommandsMixin members on the same object. It must only be included by interfaces which also include those mixins.

GPURenderCommandsMixin has the following device timeline properties:

[[layout]], of type GPURenderPassLayout, readonly

The layout of the render pass.

[[depthReadOnly]], of type boolean, readonly

If true, indicates that the depth component is not modified.

[[stencilReadOnly]], of type boolean, readonly

If true, indicates that the stencil component is not modified.

[[usage scope]], of type usage scope, initially empty

The usage scope for this render pass or bundle.

[[pipeline]], of type GPURenderPipeline, initially null

The current GPURenderPipeline.

[[index_buffer]], of type GPUBuffer, initially null

The current buffer to read index data from.

[[index_format]], of type GPUIndexFormat

The format of the index data in [[index_buffer]].

[[index_buffer_offset]], of type GPUSize64

The offset in bytes of the section of [[index_buffer]] currently set.

[[index_buffer_size]], of type GPUSize64

The size in bytes of the section of [[index_buffer]] currently set, initially 0.

[[vertex_buffers]], of type ordered map<slot, GPUBuffer>, initially empty

The current GPUBuffers to read vertex data from for each slot.

[[vertex_buffer_sizes]], of type ordered map<slot, GPUSize64>, initially empty

The size in bytes of the section of GPUBuffer currently set for each slot.

[[drawCount]], of type GPUSize64

The number of draw commands recorded in this encoder.

To Enqueue a render command on GPURenderCommandsMixin encoder which issues the steps of a GPU Command command with RenderState renderState, run the following device timeline steps:
  1. Append command to encoder.[[commands]].

  2. When command is executed as part of a GPUCommandBuffer commandBuffer:

    1. Issue the steps of command with commandBuffer.[[renderState]] as renderState.

17.2.1. Drawing

setPipeline(pipeline)

Sets the current GPURenderPipeline.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.setPipeline(pipeline) method.
Parameter Type Nullable Optional Description
pipeline GPURenderPipeline The render pipeline to use for subsequent drawing commands.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let pipelineTargetsLayout be derive render targets layout from pipeline(pipeline.[[descriptor]]).

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Set this.[[pipeline]] to be pipeline.

setIndexBuffer(buffer, indexFormat, offset, size)

Sets the current index buffer.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.setIndexBuffer(buffer, indexFormat, offset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer Buffer containing index data to use for subsequent drawing commands.
indexFormat GPUIndexFormat Format of the index data contained in buffer.
offset GPUSize64 Offset in bytes into buffer where the index data begins. Defaults to 0.
size GPUSize64 Size in bytes of the index data in buffer. Defaults to the size of the buffer minus the offset.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If size is missing, set size to max(0, buffer.size - offset).

  3. If any of the following conditions are unsatisfied, invalidate this and return.

  4. Add buffer to [[usage scope]] with usage input.

  5. Set this.[[index_buffer]] to be buffer.

  6. Set this.[[index_format]] to be indexFormat.

  7. Set this.[[index_buffer_offset]] to be offset.

  8. Set this.[[index_buffer_size]] to be size.

setVertexBuffer(slot, buffer, offset, size)

Sets the current vertex buffer for the given slot.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.setVertexBuffer(slot, buffer, offset, size) method.
Parameter Type Nullable Optional Description
slot GPUIndex32 The vertex buffer slot to set the vertex buffer for.
buffer GPUBuffer? Buffer containing vertex data to use for subsequent drawing commands.
offset GPUSize64 Offset in bytes into buffer where the vertex data begins. Defaults to 0.
size GPUSize64 Size in bytes of the vertex data in buffer. Defaults to the size of the buffer minus the offset.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let bufferSize be 0 if buffer is null, or buffer.size if not.

  3. If size is missing, set size to max(0, bufferSize - offset).

  4. If any of the following requirements are unmet, invalidate this and return.

  5. If buffer is null:

    1. Remove this.[[vertex_buffers]][slot].

    2. Remove this.[[vertex_buffer_sizes]][slot].

    Otherwise:

    1. If any of the following requirements are unmet, invalidate this and return.

    2. Add buffer to [[usage scope]] with usage input.

    3. Set this.[[vertex_buffers]][slot] to be buffer.

    4. Set this.[[vertex_buffer_sizes]][slot] to be size.

draw(vertexCount, instanceCount, firstVertex, firstInstance)

Draws primitives. See § 23.2 Rendering for the detailed specification.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.draw(vertexCount, instanceCount, firstVertex, firstInstance) method.
Parameter Type Nullable Optional Description
vertexCount GPUSize32 The number of vertices to draw.
instanceCount GPUSize32 The number of instances to draw.
firstVertex GPUSize32 Offset into the vertex buffers, in vertices, to begin drawing from.
firstInstance GPUSize32 First instance to draw.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. All of the requirements in the following steps must be met. If any are unmet, invalidate this and return.

    1. It must be valid to draw with this.

    2. Let buffers be this.[[pipeline]].[[descriptor]].vertex.buffers.

    3. For each GPUIndex32 slot from 0 to buffers.size (non-inclusive):

      1. If buffers[slot] is null, continue.

      2. Let bufferSize be this.[[vertex_buffer_sizes]][slot].

      3. Let stride be buffers[slot].arrayStride.

      4. Let attributes be buffers[slot].attributes

      5. Let lastStride be the maximum value of (attribute.offset + byteSize(attribute.format)) over each attribute in attributes, or 0 if attributes is empty.

      6. Let strideCount be computed based on buffers[slot].stepMode:

        "vertex"

        firstVertex + vertexCount

        "instance"

        firstInstance + instanceCount

      7. If strideCount ≠ 0:

        1. (strideCount − 1) × stride + lastStride must be ≤ bufferSize.

  3. Increment this.[[drawCount]] by 1.

  4. Let bindingState be a snapshot of this’s current state.

  5. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from bindingState and renderState.
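Note: (Non-normative.) The vertex-buffer bounds check in the device timeline steps above can be sketched as a pure function. `buffers` models the pipeline's vertex.buffers and `boundSizes` the per-slot sizes set by setVertexBuffer(); `byteSize` is precomputed per attribute here, where the spec derives it from attribute.format.

```javascript
// Sketch of draw()'s per-slot vertex buffer bounds validation.
function vertexBuffersInBounds(buffers, boundSizes,
                               vertexCount, instanceCount,
                               firstVertex, firstInstance) {
  for (let slot = 0; slot < buffers.length; slot++) {
    const layout = buffers[slot];
    if (layout === null) continue;
    const stride = layout.arrayStride;
    // lastStride: furthest byte any attribute reaches within one stride.
    const lastStride = layout.attributes.reduce(
      (max, a) => Math.max(max, a.offset + a.byteSize), 0);
    const strideCount = layout.stepMode === "instance"
      ? firstInstance + instanceCount
      : firstVertex + vertexCount;
    if (strideCount !== 0 &&
        (strideCount - 1) * stride + lastStride > boundSizes[slot]) {
      return false;
    }
  }
  return true;
}
```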

drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance)

Draws indexed primitives. See § 23.2 Rendering for the detailed specification.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance) method.
Parameter Type Nullable Optional Description
indexCount GPUSize32 The number of indices to draw.
instanceCount GPUSize32 The number of instances to draw.
firstIndex GPUSize32 Offset into the index buffer, in indices, to begin drawing from.
baseVertex GPUSignedOffset32 Added to each index value before indexing into the vertex buffers.
firstInstance GPUSize32 First instance to draw.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Increment this.[[drawCount]] by 1.

  4. Let bindingState be a snapshot of this’s current state.

  5. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from bindingState and renderState.

Note: WebGPU applications should never use index data with indices out of bounds of any bound vertex buffer that has GPUVertexStepMode "vertex". WebGPU implementations have different ways of handling this, and therefore a range of behaviors is allowed. Either the whole draw call is discarded, or the access to those attributes out of bounds is described by WGSL’s invalid memory reference.

drawIndirect(indirectBuffer, indirectOffset)

Draws primitives using parameters read from a GPUBuffer. See § 23.2 Rendering for the detailed specification.

The indirect draw parameters encoded in the buffer must be a tightly packed block of four 32-bit unsigned integer values (16 bytes total), given in the same order as the arguments for draw(). For example:

let drawIndirectParameters = new Uint32Array(4);
drawIndirectParameters[0] = vertexCount;
drawIndirectParameters[1] = instanceCount;
drawIndirectParameters[2] = firstVertex;
drawIndirectParameters[3] = firstInstance;

The value corresponding to firstInstance must be 0, unless the "indirect-first-instance" feature is enabled. If the "indirect-first-instance" feature is not enabled and firstInstance is not zero the drawIndirect() call will be treated as a no-op.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.drawIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect draw parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the drawing data begins.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Add indirectBuffer to [[usage scope]] with usage input.

  4. Increment this.[[drawCount]] by 1.

  5. Let bindingState be a snapshot of this’s current state.

  6. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Let vertexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.

  2. Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.

  3. Let firstVertex be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.

  4. Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 12) bytes.

  5. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from bindingState and renderState.
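Note: (Non-normative.) Writing the four-value parameter block into a GPUBuffer and recording the indirect draw might look like the following. `device` and `passEncoder` are assumed to be pre-created; the buffer needs INDIRECT usage (and COPY_DST for the writeBuffer() upload used here).

```javascript
// Create an indirect-draw parameter buffer and record a drawIndirect
// call reading from offset 0. firstInstance is left at 0, which is
// required unless "indirect-first-instance" is enabled.
function encodeIndirectDraw(device, passEncoder,
                            vertexCount, instanceCount, firstVertex) {
  const params = new Uint32Array([vertexCount, instanceCount,
                                  firstVertex, /* firstInstance */ 0]);
  const buffer = device.createBuffer({
    size: params.byteLength, // 16 bytes: four 32-bit values
    usage: GPUBufferUsage.INDIRECT | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(buffer, 0, params);
  passEncoder.drawIndirect(buffer, 0);
  return buffer;
}
```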

drawIndexedIndirect(indirectBuffer, indirectOffset)

Draws indexed primitives using parameters read from a GPUBuffer. See § 23.2 Rendering for the detailed specification.

The indirect drawIndexed parameters encoded in the buffer must be a tightly packed block of five 32-bit values (20 bytes total), given in the same order as the arguments for drawIndexed(). The value corresponding to baseVertex is a signed 32-bit integer, and all others are unsigned 32-bit integers. For example:

let drawIndexedIndirectParameters = new Uint32Array(5);
let drawIndexedIndirectParametersSigned = new Int32Array(drawIndexedIndirectParameters.buffer);
drawIndexedIndirectParameters[0] = indexCount;
drawIndexedIndirectParameters[1] = instanceCount;
drawIndexedIndirectParameters[2] = firstIndex;
// baseVertex is a signed value.
drawIndexedIndirectParametersSigned[3] = baseVertex;
drawIndexedIndirectParameters[4] = firstInstance;

The value corresponding to firstInstance must be 0, unless the "indirect-first-instance" feature is enabled. If the "indirect-first-instance" feature is not enabled and firstInstance is not zero the drawIndexedIndirect() call will be treated as a no-op.

Called on: GPURenderCommandsMixin this.

Arguments:

Arguments for the GPURenderCommandsMixin.drawIndexedIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect drawIndexed parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the drawing data begins.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Add indirectBuffer to [[usage scope]] with usage input.

  4. Increment this.[[drawCount]] by 1.

  5. Let bindingState be a snapshot of this’s current state.

  6. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Let indexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.

  2. Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 4) bytes.

  3. Let firstIndex be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 8) bytes.

  4. Let baseVertex be a signed 32-bit integer read from indirectBuffer at (indirectOffset + 12) bytes.

  5. Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at (indirectOffset + 16) bytes.

  6. Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from bindingState and renderState.

To determine if it’s valid to draw with GPURenderCommandsMixin encoder, run the following device timeline steps:
  1. If any of the following conditions are unsatisfied, return false:

  2. Otherwise return true.

To determine if it’s valid to draw indexed with GPURenderCommandsMixin encoder, run the following device timeline steps:
  1. If any of the following conditions are unsatisfied, return false:

  2. Otherwise return true.

17.2.2. Rasterization state

The GPURenderPassEncoder has several methods which affect how draw commands are rasterized to attachments used by this encoder.

setViewport(x, y, width, height, minDepth, maxDepth)

Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setViewport(x, y, width, height, minDepth, maxDepth) method.
Parameter Type Nullable Optional Description
x float Minimum X value of the viewport in pixels.
y float Minimum Y value of the viewport in pixels.
width float Width of the viewport in pixels.
height float Height of the viewport in pixels.
minDepth float Minimum depth value of the viewport.
maxDepth float Maximum depth value of the viewport.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Let maxViewportRange be this.limits.maxTextureDimension2D × 2.

  3. If any of the following conditions are unsatisfied, invalidate this and return.

    • x ≥ -maxViewportRange

    • y ≥ -maxViewportRange

    • 0 ≤ width ≤ this.limits.maxTextureDimension2D

    • 0 ≤ height ≤ this.limits.maxTextureDimension2D

    • x + width ≤ maxViewportRange − 1

    • y + height ≤ maxViewportRange − 1

    • 0.0 ≤ minDepth ≤ 1.0

    • 0.0 ≤ maxDepth ≤ 1.0

    • minDepth ≤ maxDepth

  4. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Round x, y, width, and height to some uniform precision, no less precise than integer rounding.

  2. Set renderState.[[viewport]] to the extents x, y, width, height, minDepth, and maxDepth.
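Note: (Non-normative.) The validation conditions in the device timeline steps above can be sketched as a pure predicate, with `maxDim` modeling this.limits.maxTextureDimension2D.

```javascript
// Sketch of setViewport()'s validation conditions.
function viewportIsValid(x, y, width, height, minDepth, maxDepth, maxDim) {
  const maxViewportRange = maxDim * 2;
  return x >= -maxViewportRange &&
         y >= -maxViewportRange &&
         width >= 0 && width <= maxDim &&
         height >= 0 && height <= maxDim &&
         x + width <= maxViewportRange - 1 &&
         y + height <= maxViewportRange - 1 &&
         minDepth >= 0.0 && minDepth <= 1.0 &&
         maxDepth >= 0.0 && maxDepth <= 1.0 &&
         minDepth <= maxDepth;
}
```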

setScissorRect(x, y, width, height)

Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates any fragments which fall outside the scissor rectangle will be discarded.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setScissorRect(x, y, width, height) method.
Parameter Type Nullable Optional Description
x GPUIntegerCoordinate Minimum X value of the scissor rectangle in pixels.
y GPUIntegerCoordinate Minimum Y value of the scissor rectangle in pixels.
width GPUIntegerCoordinate Width of the scissor rectangle in pixels.
height GPUIntegerCoordinate Height of the scissor rectangle in pixels.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[scissorRect]] to the extents x, y, width, and height.

setBlendConstant(color)

Sets the constant blend color and alpha values used with "constant" and "one-minus-constant" GPUBlendFactors.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setBlendConstant(color) method.
Parameter Type Nullable Optional Description
color GPUColor The color to use when blending.

Returns: undefined

Content timeline steps:

  1. ? validate GPUColor shape(color).

  2. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[blendConstant]] to color.

setStencilReference(reference)

Sets the [[stencilReference]] value used during stencil tests with the "replace" GPUStencilOperation.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setStencilReference(reference) method.
Parameter Type Nullable Optional Description
reference GPUStencilValue The new stencil reference value.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[stencilReference]] to reference.

17.2.3. Queries

beginOcclusionQuery(queryIndex)
Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.beginOcclusionQuery(queryIndex) method.
Parameter Type Nullable Optional Description
queryIndex GPUSize32 The index of the query in the query set.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Set this.[[occlusion_query_active]] to true.

  4. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Set renderState.[[occlusionQueryIndex]] to queryIndex.

endOcclusionQuery()
Called on: GPURenderPassEncoder this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. Set this.[[occlusion_query_active]] to false.

  4. Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.

Queue timeline steps:
  1. Let passingFragments be non-zero if any fragment samples passed all per-fragment tests since the corresponding beginOcclusionQuery() command was executed, and zero otherwise.

    Note: If no draw calls occurred, passingFragments is zero.

  2. Write passingFragments into this.[[occlusion_query_set]] at index renderState.[[occlusionQueryIndex]].
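Note: (Non-normative.) An occlusion query brackets the draws whose fragments it should count. A sketch, where `passEncoder` is a GPURenderPassEncoder whose pass was begun with an occlusionQuerySet; the result lands in that set at `queryIndex` and can be resolved later with resolveQuerySet().

```javascript
// Record an occlusion query around a single draw.
function drawWithOcclusionQuery(passEncoder, queryIndex, vertexCount) {
  passEncoder.beginOcclusionQuery(queryIndex);
  passEncoder.draw(vertexCount);
  passEncoder.endOcclusionQuery();
}
```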

17.2.4. Bundles

executeBundles(bundles)

Executes the commands previously recorded into the given GPURenderBundles as part of this render pass.

When a GPURenderBundle is executed, it does not inherit the render pass’s pipeline, bind groups, or vertex and index buffers. After a GPURenderBundle has executed, the render pass’s pipeline, bind group, and vertex/index buffer state is cleared (to the initial, empty values).

Note: The state is cleared, not restored to the previous state. This occurs even if zero GPURenderBundles are executed.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.executeBundles(bundles) method.
Parameter Type Nullable Optional Description
bundles sequence<GPURenderBundle> List of render bundles to execute.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Validate the encoder state of this. If it returns false, return.

  2. If any of the following conditions are unsatisfied, invalidate this and return.

  3. For each bundle in bundles:

    1. Increment this.[[drawCount]] by bundle.[[drawCount]].

    2. Merge bundle.[[usage scope]] into this.[[usage scope]].

    3. Enqueue a render command on this which issues the following steps on the Queue timeline with renderState when executed:

      Queue timeline steps:
      1. Execute each command in bundle.[[command_list]] with renderState.

        Note: renderState cannot be changed by executing render bundles. Binding state was already captured at bundle encoding time, and so isn’t used when executing bundles.

  4. Reset the render pass binding state of this.

To Reset the render pass binding state of GPURenderPassEncoder encoder run the following device timeline steps:
  1. Clear encoder.[[bind_groups]].

  2. Set encoder.[[pipeline]] to null.

  3. Set encoder.[[index_buffer]] to null.

  4. Clear encoder.[[vertex_buffers]].

18. Bundles

A bundle is a partial, limited pass that is encoded once and can then be executed multiple times as part of future pass encoders without expiring after use like typical command buffers. This can reduce the overhead of encoding and submission of commands which are issued repeatedly without changing.

18.1. GPURenderBundle

[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;
[[command_list]], of type list<GPU command>

A list of GPU commands to be submitted to the GPURenderPassEncoder when the GPURenderBundle is executed.

[[usage scope]], of type usage scope, initially empty

The usage scope for this render bundle, stored for later merging into the GPURenderPassEncoder’s [[usage scope]] in executeBundles().

[[layout]], of type GPURenderPassLayout

The layout of the render bundle.

[[depthReadOnly]], of type boolean

If true, indicates that the depth component is not modified by executing this render bundle.

[[stencilReadOnly]], of type boolean

If true, indicates that the stencil component is not modified by executing this render bundle.

[[drawCount]], of type GPUSize64

The number of draw commands in this GPURenderBundle.

18.1.1. Render Bundle Creation

dictionary GPURenderBundleDescriptor
         : GPUObjectDescriptorBase {
};
[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUBindingCommandsMixin;
GPURenderBundleEncoder includes GPURenderCommandsMixin;
createRenderBundleEncoder(descriptor)

Creates a GPURenderBundleEncoder.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createRenderBundleEncoder(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderBundleEncoderDescriptor Description of the GPURenderBundleEncoder to create.

Returns: GPURenderBundleEncoder

Content timeline steps:

  1. ? Validate texture format required features of each non-null element of descriptor.colorFormats with this.[[device]].

  2. If descriptor.depthStencilFormat is provided:

    1. ? Validate texture format required features of descriptor.depthStencilFormat with this.[[device]].

  3. Let e be ! create a new WebGPU object(this, GPURenderBundleEncoder, descriptor).

  4. Issue the initialization steps on the Device timeline of this.

  5. Return e.

Device timeline initialization steps:
  1. If any of the following conditions are unsatisfied generate a validation error, invalidate e and return.

  2. Set e.[[layout]] to a copy of descriptor’s included GPURenderPassLayout interface.

  3. Set e.[[depthReadOnly]] to descriptor.depthReadOnly.

  4. Set e.[[stencilReadOnly]] to descriptor.stencilReadOnly.

  5. Set e.[[state]] to "open".

  6. Set e.[[drawCount]] to 0.

18.1.2. Encoding

dictionary GPURenderBundleEncoderDescriptor
         : GPURenderPassLayout {
    boolean depthReadOnly = false;
    boolean stencilReadOnly = false;
};
depthReadOnly, of type boolean, defaulting to false

If true, indicates that the render bundle does not modify the depth component of the GPURenderPassDepthStencilAttachment of any render pass the render bundle is executed in.

See read-only depth-stencil.

stencilReadOnly, of type boolean, defaulting to false

If true, indicates that the render bundle does not modify the stencil component of the GPURenderPassDepthStencilAttachment of any render pass the render bundle is executed in.

See read-only depth-stencil.

18.1.3. Finalization

finish(descriptor)

Completes recording of the render bundle commands sequence.

Called on: GPURenderBundleEncoder this.

Arguments:

Arguments for the GPURenderBundleEncoder.finish(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderBundleDescriptor

Returns: GPURenderBundle

Content timeline steps:

  1. Let renderBundle be a new GPURenderBundle.

  2. Issue the finish steps on the Device timeline of this.[[device]].

  3. Return renderBundle.

Device timeline finish steps:
  1. Let validationSucceeded be true if all of the following requirements are met, and false otherwise.

  2. Set this.[[state]] to "ended".

  3. If validationSucceeded is false, then:

    1. Generate a validation error.

    2. Return an invalidated GPURenderBundle.

  4. Set renderBundle.[[command_list]] to this.[[commands]].

  5. Set renderBundle.[[usage scope]] to this.[[usage scope]].

  6. Set renderBundle.[[drawCount]] to this.[[drawCount]].

19. Queues

19.1. GPUQueueDescriptor

GPUQueueDescriptor describes a queue request.

dictionary GPUQueueDescriptor
         : GPUObjectDescriptorBase {
};

19.2. GPUQueue

[Exposed=(Window, Worker), SecureContext]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);

    Promise<undefined> onSubmittedWorkDone();

    undefined writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        AllowSharedBufferSource data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    undefined writeTexture(
        GPUTexelCopyTextureInfo destination,
        AllowSharedBufferSource data,
        GPUTexelCopyBufferLayout dataLayout,
        GPUExtent3D size);

    undefined copyExternalImageToTexture(
        GPUCopyExternalImageSourceInfo source,
        GPUCopyExternalImageDestInfo destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;

GPUQueue has the following methods:

writeBuffer(buffer, bufferOffset, data, dataOffset, size)

Issues a write operation of the provided data into a GPUBuffer.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.writeBuffer(buffer, bufferOffset, data, dataOffset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer The buffer to write to.
bufferOffset GPUSize64 Offset in bytes into buffer to begin writing at.
data AllowSharedBufferSource Data to write into buffer.
dataOffset GPUSize64 Offset into data to begin writing from. Given in elements if data is a TypedArray and bytes otherwise.
size GPUSize64 Size of content to write from data to buffer. Given in elements if data is a TypedArray and bytes otherwise.

Returns: undefined

Content timeline steps:

  1. If data is an ArrayBuffer or DataView, let the element type be "byte". Otherwise, data is a TypedArray; let the element type be the type of the TypedArray.

  2. Let dataSize be the size of data, in elements.

  3. If size is missing, let contentsSize be dataSizedataOffset. Otherwise, let contentsSize be size.

  4. If any of the following conditions are unsatisfied, throw an OperationError and return.

    • contentsSize ≥ 0.

    • dataOffset + contentsSizedataSize.

    • contentsSize, converted to bytes, is a multiple of 4 bytes.

  5. Let dataContents be a copy of the bytes held by the buffer source data.

  6. Let contents be the contentsSize elements of dataContents starting at an offset of dataOffset elements.

  7. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. If any of the following conditions are unsatisfied, generate a validation error and return.

  2. Issue the subsequent steps on the Queue timeline of this.

Queue timeline steps:
  1. Write contents into buffer starting at bufferOffset.
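The content timeline size checks above (steps 3–4) amount to simple arithmetic over element counts. A non-normative sketch with illustrative names, where bytesPerElement is 1 for ArrayBuffer/DataView sources ("byte" element type) and the element size of the TypedArray otherwise:

```javascript
// Non-normative sketch of writeBuffer()'s content timeline validation.
function validateWriteBufferArgs(dataSizeInElements, dataOffset, size, bytesPerElement) {
  // Step 3: if size is missing, default to the rest of the data.
  const contentsSize = size === undefined ? dataSizeInElements - dataOffset : size;
  // Step 4: all three conditions must hold, or an OperationError is thrown.
  return contentsSize >= 0 &&
         dataOffset + contentsSize <= dataSizeInElements &&
         (contentsSize * bytesPerElement) % 4 === 0;
}
```

For instance, writing 3 elements of a Uint8Array fails the multiple-of-4-bytes requirement, while writing 3 elements of a Float32Array (12 bytes) passes it.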

writeTexture(destination, data, dataLayout, size)

Issues a write operation of the provided data into a GPUTexture.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.writeTexture(destination, data, dataLayout, size) method.
Parameter Type Nullable Optional Description
destination GPUTexelCopyTextureInfo The texture subresource and origin to write to.
data AllowSharedBufferSource Data to write into destination.
dataLayout GPUTexelCopyBufferLayout Layout of the content in data.
size GPUExtent3D Extents of the content to write from data to destination.

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin3D shape(destination.origin).

  2. ? validate GPUExtent3D shape(size).

  3. Let dataBytes be a copy of the bytes held by the buffer source data.

    Note: This is described as copying all of data to the device timeline, but in practice data could be much larger than necessary. Implementations should optimize by copying only the necessary bytes.

  4. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. Let aligned be false.

  2. Let dataLength be dataBytes.length.

  3. If any of the following conditions are unsatisfied, generate a validation error and return.

    Note: unlike GPUCommandEncoder.copyBufferToTexture(), there is no alignment requirement on either dataLayout.bytesPerRow or dataLayout.offset.

  4. Issue the subsequent steps on the Queue timeline of this.

Queue timeline steps:
  1. Let blockWidth be the texel block width of destination.texture.

  2. Let blockHeight be the texel block height of destination.texture.

  3. Let dstOrigin be destination.origin.

  4. Let dstBlockOriginX be (dstOrigin.x ÷ blockWidth).

  5. Let dstBlockOriginY be (dstOrigin.y ÷ blockHeight).

  6. Let blockColumns be (copySize.width ÷ blockWidth).

  7. Let blockRows be (copySize.height ÷ blockHeight).

  8. Assert that dstBlockOriginX, dstBlockOriginY, blockColumns, and blockRows are integers.

  9. For each z in the range [0, copySize.depthOrArrayLayers − 1]:

    1. Let dstSubregion be texture copy sub-region (z + dstOrigin.z) of destination.

    2. For each y in the range [0, blockRows − 1]:

      1. For each x in the range [0, blockColumns − 1]:

        1. Let blockOffset be the texel block byte offset of dataLayout for (x, y, z) of destination.texture.

        2. Set texel block (dstBlockOriginX + x, dstBlockOriginY + y) of dstSubregion to be an equivalent texel representation to the texel block described by dataBytes at offset blockOffset.
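The texel block byte offset used in the loop above can be sketched as follows (non-normative; dataLayout fields follow GPUTexelCopyBufferLayout, and blockByteSize stands in for the byte size of one texel block of the destination format):

```javascript
// Non-normative sketch of the "texel block byte offset" addressing used by
// writeTexture()'s queue timeline steps.
function texelBlockByteOffset(dataLayout, x, y, z, blockByteSize) {
  const { offset = 0, bytesPerRow, rowsPerImage } = dataLayout;
  return offset +
         z * rowsPerImage * bytesPerRow + // start of image slice z
         y * bytesPerRow +                // start of block row y within the slice
         x * blockByteSize;               // block x within the row
}
```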

copyExternalImageToTexture(source, destination, copySize)

Issues a copy operation of the contents of a platform image/canvas into the destination texture.

This operation performs color encoding into the destination encoding according to the parameters of GPUCopyExternalImageDestInfo.

Copying into a -srgb texture results in the same texture bytes, not the same decoded values, as copying into the corresponding non--srgb format. Thus, after a copy operation, sampling the destination texture has different results depending on whether its format is -srgb, all else unchanged.

NOTE:
When copying from a "webgl"/"webgl2" context canvas, the WebGL Drawing Buffer may not exist during certain points in the frame presentation cycle (after the image has been moved to the compositor for display). To avoid this, either:
  • Issue copyExternalImageToTexture() in the same task as the WebGL rendering operations, to ensure the copy occurs before the WebGL canvas is presented.

  • If not possible, set the preserveDrawingBuffer option in WebGLContextAttributes to true, so that the drawing buffer will still contain a copy of the frame contents after they’ve been presented. Note, this extra copy may have a performance cost.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.copyExternalImageToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUCopyExternalImageSourceInfo source image and origin to copy to destination.
destination GPUCopyExternalImageDestInfo The texture subresource and origin to write to, and its encoding metadata.
copySize GPUExtent3D Extents of the content to write from source to destination.

Returns: undefined

Content timeline steps:

  1. ? validate GPUOrigin2D shape(source.origin).

  2. ? validate GPUOrigin3D shape(destination.origin).

  3. ? validate GPUExtent3D shape(copySize).

  4. Let sourceImage be source.source.

  5. If sourceImage is not origin-clean, throw a SecurityError and return.

  6. If any of the following requirements are unmet, throw an OperationError and return.

    • source.origin.x + copySize.width must be ≤ the width of sourceImage.

    • source.origin.y + copySize.height must be ≤ the height of sourceImage.

    • copySize.depthOrArrayLayers must be ≤ 1.

  7. Let usability be ? check the usability of the image argument(source).

  8. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. Let texture be destination.texture.

  2. If any of the following requirements are unmet, generate a validation error and return.

  3. If copySize.depthOrArrayLayers is > 0, issue the subsequent steps on the Queue timeline of this.

Queue timeline steps:
  1. Assert that the texel block width of destination.texture is 1, the texel block height of destination.texture is 1, and that copySize.depthOrArrayLayers is 1.

  2. Let srcOrigin be source.origin.

  3. Let dstOrigin be destination.origin.

  4. Let dstSubregion be texture copy sub-region (dstOrigin.z) of destination.

  5. For each y in the range [0, copySize.height − 1]:

    1. Let srcY be y if source.flipY is false and (copySize.height − 1 − y) otherwise.

    2. For each x in the range [0, copySize.width − 1]:

      1. Set texel block (dstOrigin.x + x, dstOrigin.y + y) of dstSubregion to be an equivalent texel representation of the pixel at (srcOrigin.x + x, srcOrigin.y + srcY) of source.source after applying any color encoding required by destination.colorSpace and destination.premultipliedAlpha.
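The flipY handling in the loop above simply mirrors the source row index. A non-normative sketch:

```javascript
// Non-normative sketch of the source-row selection in copyExternalImageToTexture():
// when flipY is set, destination row y reads from the vertically mirrored source row.
function sourceRow(y, copyHeight, flipY) {
  return flipY ? copyHeight - 1 - y : y;
}
```

With flipY set, destination row 0 samples the bottom row of the copied region.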

submit(commandBuffers)

Schedules the execution of the command buffers by the GPU on this queue.

Submitted command buffers cannot be used again.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.submit(commandBuffers) method.
Parameter Type Nullable Optional Description
commandBuffers sequence<GPUCommandBuffer>

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this:

Device timeline steps:
  1. If any of the following requirements are unmet, generate a validation error, invalidate each GPUCommandBuffer in commandBuffers and return.

  2. For each commandBuffer in commandBuffers:

    1. Invalidate commandBuffer.

  3. Issue the subsequent steps on the Queue timeline of this:

Queue timeline steps:
  1. For each commandBuffer in commandBuffers:

    1. Execute each command in commandBuffer.[[command_list]].

onSubmittedWorkDone()

Returns a Promise that resolves once this queue finishes processing all the work submitted up to this moment.

Resolution of this Promise implies the completion of mapAsync() calls made prior to that call, on GPUBuffers last used exclusively on that queue.

Called on: GPUQueue this.

Returns: Promise<undefined>

Content timeline steps:

  1. Let contentTimeline be the current Content timeline.

  2. Let promise be a new promise.

  3. Issue the synchronization steps on the Device timeline of this.

  4. Return promise.

Device timeline synchronization steps:
  1. Let event occur upon the completion of all currently-enqueued operations.

  2. Listen for timeline event event on this.[[device]], handled by the subsequent steps on contentTimeline.

Content timeline steps:
  1. Resolve promise.

20. Queries

20.1. GPUQuerySet

[Exposed=(Window, Worker), SecureContext]
interface GPUQuerySet {
    undefined destroy();

    readonly attribute GPUQueryType type;
    readonly attribute GPUSize32Out count;
};
GPUQuerySet includes GPUObjectBase;

GPUQuerySet has the following immutable properties:

type, of type GPUQueryType, readonly

The type of the queries managed by this GPUQuerySet.

count, of type GPUSize32Out, readonly

The number of queries managed by this GPUQuerySet.

GPUQuerySet has the following device timeline properties:

[[destroyed]], of type boolean, initially false

If the query set is destroyed, it can no longer be used in any operation, and its underlying memory can be freed.

20.1.1. QuerySet Creation

A GPUQuerySetDescriptor specifies the options to use in creating a GPUQuerySet.

dictionary GPUQuerySetDescriptor
         : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
};
type, of type GPUQueryType

The type of queries managed by GPUQuerySet.

count, of type GPUSize32

The number of queries managed by GPUQuerySet.

createQuerySet(descriptor)

Creates a GPUQuerySet.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createQuerySet(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUQuerySetDescriptor Description of the GPUQuerySet to create.

Returns: GPUQuerySet

Content timeline steps:

  1. If descriptor.type is "timestamp", but "timestamp-query" is not enabled for this:

    1. Throw a TypeError.

  2. Let q be ! create a new WebGPU object(this, GPUQuerySet, descriptor).

  3. Set q.type to descriptor.type.

  4. Set q.count to descriptor.count.

  5. Issue the initialization steps on the Device timeline of this.

  6. Return q.

Device timeline initialization steps:
  1. If any of the following requirements are unmet, generate a validation error, invalidate q and return.

    • this must not be lost.

    • descriptor.count must be ≤ 4096.

  2. Create a device allocation for q where each entry in the query set is zero.

    If the allocation fails without side-effects, generate an out-of-memory error, invalidate q, and return.

Creating a GPUQuerySet which holds 32 occlusion query results.
const querySet = gpuDevice.createQuerySet({
    type: 'occlusion',
    count: 32
});

20.1.2. Query Set Destruction

An application that no longer requires a GPUQuerySet can choose to lose access to it before garbage collection by calling destroy().

GPUQuerySet has the following methods:

destroy()

Destroys the GPUQuerySet.

Called on: GPUQuerySet this.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.[[device]].

Device timeline steps:
  1. Set this.[[destroyed]] to true.

20.2. QueryType

enum GPUQueryType {
    "occlusion",
    "timestamp",
};

20.3. Occlusion Query

Occlusion query is only available on render passes, to query the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha to coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline, 0 indicates that no samples passed the tests.

When beginning a render pass, GPURenderPassDescriptor.occlusionQuerySet must be set to be able to use occlusion queries during the pass. An occlusion query is begun and ended by calling beginOcclusionQuery() and endOcclusionQuery() in pairs that cannot be nested, and resolved into a GPUBuffer as a 64-bit unsigned integer by GPUCommandEncoder.resolveQuerySet().

20.4. Timestamp Query

Timestamp queries allow applications to write timestamps to a GPUQuerySet, using:

and then resolve timestamp values (in nanoseconds as a 64-bit unsigned integer) into a GPUBuffer, using GPUCommandEncoder.resolveQuerySet().

Timestamp values are implementation-defined and may not increase monotonically. The physical device may reset the timestamp counter occasionally, which can result in unexpected values such as negative deltas between timestamps that logically should be monotonically increasing. These instances should be rare and can safely be ignored. Applications should not be written in such a way that unexpected timestamps cause an application failure.

There is a tracking vector here. Timestamp queries are implemented using high-resolution timers (see § 2.1.7.2 Device/queue-timeline timing). To mitigate security and privacy concerns, their precision must be reduced:

To get the current queue timestamp, run the following queue timeline steps:

Note: Since cross-origin isolation may not apply to the device timeline or queue timeline, crossOriginIsolatedCapability is never set to true.
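A non-normative sketch of the precision reduction, assuming the 100 microsecond coarsened resolution that applies without cross-origin isolation (the normative behavior is the "coarsen time" algorithm of [HR-TIME]):

```javascript
// Non-normative sketch: round a nanosecond timestamp down to a coarser
// granularity before exposing it. 100 000 ns = 100 microseconds.
function coarsenTimestampNs(timestampNs, granularityNs = 100_000) {
  return Math.floor(timestampNs / granularityNs) * granularityNs;
}
```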

Validate timestampWrites(device, timestampWrites)

Arguments:

Device timeline steps:

  1. Return true if the following requirements are met, and false if not:

21. Canvas Rendering

21.1. HTMLCanvasElement.getContext()

A GPUCanvasContext object is created via the getContext() method of an HTMLCanvasElement instance by passing the string literal 'webgpu' as its contextType argument.

Get a GPUCanvasContext from an offscreen HTMLCanvasElement:
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');

Unlike WebGL or 2D context creation, the second argument of HTMLCanvasElement.getContext() or OffscreenCanvas.getContext(), the context creation attribute dictionary options, is ignored. Instead, use GPUCanvasContext.configure(), which allows changing the canvas configuration without replacing the canvas.

To create a 'webgpu' context on a canvas (HTMLCanvasElement or OffscreenCanvas) canvas, run the following content timeline steps:
  1. Let context be a new GPUCanvasContext.

  2. Set context.canvas to canvas.

  3. Replace the drawing buffer of context.

  4. Return context.

Note: User agents should consider issuing developer-visible warnings when an ignored options argument is provided when calling getContext() to get a WebGPU canvas context.

21.2. GPUCanvasContext

[Exposed=(Window, Worker), SecureContext]
interface GPUCanvasContext {
    readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;

    undefined configure(GPUCanvasConfiguration configuration);
    undefined unconfigure();

    GPUCanvasConfiguration? getConfiguration();
    GPUTexture getCurrentTexture();
};

GPUCanvasContext has the following content timeline properties:

canvas, of type (HTMLCanvasElement or OffscreenCanvas), readonly

The canvas this context was created from.

[[configuration]], of type GPUCanvasConfiguration?, initially null

The options this context is currently configured with.

null if the context has not been configured or has been unconfigured.

[[textureDescriptor]], of type GPUTextureDescriptor?, initially null

The currently configured texture descriptor, derived from the [[configuration]] and canvas.

null if the context has not been configured or has been unconfigured.

[[drawingBuffer]], an image, initially a transparent black image with the same size as the canvas

The drawing buffer is the working-copy image data of the canvas. It is exposed as writable by [[currentTexture]] (returned by getCurrentTexture()).

The drawing buffer is used to get a copy of the image contents of a context, which occurs when the canvas is displayed or otherwise read. It may be transparent, even if [[configuration]].alphaMode is "opaque". The alphaMode only affects the result of the "get a copy of the image contents of a context" algorithm.

The drawing buffer outlives the [[currentTexture]] and contains the previously-rendered contents even after the canvas has been presented. It is only cleared in Replace the drawing buffer.

Any time the drawing buffer is read, implementations must ensure that all previously submitted work (e.g. queue submissions) has completed writing to it via [[currentTexture]].

[[currentTexture]], of type GPUTexture?, initially null

The GPUTexture to draw into for the current frame. It exposes a writable view onto the underlying [[drawingBuffer]]. getCurrentTexture() populates this slot if null, then returns it.

In the steady-state of a visible canvas, any changes to the drawing buffer made through the currentTexture get presented when updating the rendering of a WebGPU canvas. At or before that point, the texture is also destroyed and [[currentTexture]] is set to null, signalling that a new one is to be created by the next call to getCurrentTexture().

Destroying the currentTexture has no effect on the drawing buffer contents; it only terminates write-access to the drawing buffer early. During the same frame, getCurrentTexture() continues returning the same destroyed texture.

Expire the current texture sets the currentTexture to null. It is called by configure(), resizing the canvas, presentation, transferToImageBitmap(), and others.

[[lastPresentedImage]], of type (readonly image)?, initially null

The image most recently presented for this canvas in "updating the rendering of a WebGPU canvas". If the device is lost or destroyed, this image may be used as a fallback in "get a copy of the image contents of a context" in order to prevent the canvas from going blank.

Note: This property only needs to exist in implementations which implement the fallback, which is optional.

GPUCanvasContext has the following methods:

configure(configuration)

Configures the context for this canvas. This clears the drawing buffer to transparent black (in Replace the drawing buffer).

Called on: GPUCanvasContext this.

Arguments:

Arguments for the GPUCanvasContext.configure(configuration) method.
Parameter Type Nullable Optional Description
configuration GPUCanvasConfiguration Desired configuration for the context.

Returns: undefined

Content timeline steps:

  1. Let device be configuration.device.

  2. ? Validate texture format required features of configuration.format with device.[[device]].

  3. ? Validate texture format required features of each element of configuration.viewFormats with device.[[device]].

  4. If Supported context formats does not contain configuration.format, throw a TypeError.

  5. Let descriptor be the GPUTextureDescriptor for the canvas and configuration(this.canvas, configuration).

  6. Set this.[[configuration]] to configuration.

    NOTE:
    This spec requires supporting HDR via the toneMapping option. If a user agent only supports toneMapping: "standard", then the toneMapping member should not exist in GPUCanvasConfiguration, so it will not exist on the object returned by getConfiguration() and will not be accessed by configure(). This allows websites to detect feature support.
  7. Set this.[[textureDescriptor]] to descriptor.

  8. Replace the drawing buffer of this.

  9. Issue the subsequent steps on the Device timeline of device.

Device timeline steps:
  1. If any of the following requirements are unmet, generate a validation error and return.

    Note: This early validation remains valid until the next configure() call, except for validation of the size, which changes when the canvas is resized.

unconfigure()

Removes the context configuration. Destroys any textures produced while configured.

Called on: GPUCanvasContext this.

Returns: undefined

Content timeline steps:

  1. Set this.[[configuration]] to null.

  2. Set this.[[textureDescriptor]] to null.

  3. Replace the drawing buffer of this.

getConfiguration()

Returns the context configuration.

Called on: GPUCanvasContext this.

Returns: GPUCanvasConfiguration or null

Content timeline steps:

  1. Let configuration be a copy of this.[[configuration]].

  2. Return configuration.

NOTE:
In scenarios where getConfiguration() shows that toneMapping is implemented and the dynamic-range media query indicates HDR support, the WebGPU canvas should render content using the full HDR range instead of clamping values to the SDR range of the HDR display.
getCurrentTexture()

Get the GPUTexture that will be composited to the document by the GPUCanvasContext next.

NOTE:
An application should call getCurrentTexture() in the same task that renders to the canvas texture. Otherwise, the texture could get destroyed by these steps before the application is finished rendering to it.

The expiry task (defined below) is optional to implement. Even if implemented, task source priority is not normatively defined, so may happen as early as the next task, or as late as after all other task sources are empty (see automatic expiry task source). Expiry is only guaranteed when a visible canvas is displayed (updating the rendering of a WebGPU canvas) and in other callers of "Expire the current texture".

Called on: GPUCanvasContext this.

Returns: GPUTexture

Content timeline steps:

  1. If this.[[configuration]] is null, throw an InvalidStateError and return.

  2. Assert this.[[textureDescriptor]] is not null.

  3. Let device be this.[[configuration]].device.

  4. If this.[[currentTexture]] is null:

    1. Replace the drawing buffer of this.

    2. Set this.[[currentTexture]] to the result of calling device.createTexture() with this.[[textureDescriptor]], except with the GPUTexture’s underlying storage pointing to this.[[drawingBuffer]].

      Note: If the texture can’t be created (e.g. due to validation failure or out-of-memory), this generates an error and returns an invalidated GPUTexture. Some validation here is redundant with that done in configure(). Implementations must not skip this redundant validation.

  5. Optionally, queue an automatic expiry task with device device and the following steps:

    1. Expire the current texture of this.

      Note: If this already happened when updating the rendering of a WebGPU canvas, it has no effect.

  6. Return this.[[currentTexture]].

Note: The same GPUTexture object will be returned by every call to getCurrentTexture() until "Expire the current texture" runs, even if that GPUTexture is destroyed, failed validation, or failed to allocate.

To get a copy of the image contents of a context:

Arguments:

Returns: image contents

Content timeline steps:

  1. Let snapshot be a transparent black image of the same size as context.canvas.

  2. Let configuration be context.[[configuration]].

  3. If configuration is null:

    1. Return snapshot.

    Note: The configuration will be null if the context has not been configured or has been unconfigured. This is identical to the behavior when the canvas has no context.

  4. Ensure that all submitted work items (e.g. queue submissions) have completed writing to the image (via context.[[currentTexture]]).

  5. If configuration.device is found to be valid:

    1. Set snapshot to a copy of the context.[[drawingBuffer]].

    Else, if context.[[lastPresentedImage]] is not null:

    1. Optionally, set snapshot to a copy of context.[[lastPresentedImage]].

      Note: This is optional because the [[lastPresentedImage]] may no longer exist, depending on what caused device loss. Implementations may choose to skip it even if they do still have access to that image.

  6. Let alphaMode be configuration.alphaMode.

  7. If alphaMode is "opaque":
    1. Clear the alpha channel of snapshot to 1.0.

    2. Tag snapshot as being opaque.

    Note: If the [[currentTexture]] (if any) has been destroyed (for example in "Expire the current texture"), the alpha channel is unobservable, and implementations may clear the alpha channel in-place.

    Otherwise:

    Tag snapshot with alphaMode.

  8. Tag snapshot with the colorSpace and toneMapping of configuration.

  9. Return snapshot.
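The alpha handling in steps 6 and 7 can be sketched as a pure function over RGBA pixel data. This is a simplified illustration, not part of the API; the `snapshot` shape and helper name are hypothetical:

```javascript
// Sketch of the alphaMode handling in "get a copy of the image contents":
// for "opaque", the alpha channel is cleared to 1.0 and the snapshot is
// tagged as opaque; otherwise the pixels pass through and the snapshot
// is tagged with the alphaMode instead.
// `snapshot` is a hypothetical { pixels: Float32Array (RGBA), tags: {} } object.
function applyAlphaMode(snapshot, alphaMode) {
  if (alphaMode === 'opaque') {
    for (let i = 3; i < snapshot.pixels.length; i += 4) {
      snapshot.pixels[i] = 1.0; // clear the alpha channel to 1.0
    }
    snapshot.tags.opaque = true;
  } else {
    snapshot.tags.alphaMode = alphaMode;
  }
  return snapshot;
}
```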

To Replace the drawing buffer of a GPUCanvasContext context, run the following content timeline steps:
  1. Expire the current texture of context.

  2. Let configuration be context.[[configuration]].

  3. Set context.[[drawingBuffer]] to a transparent black image of the same size as context.canvas.

    • If configuration is null, the drawing buffer is tagged with the color space "srgb". In this case, the drawing buffer will remain blank until the context is configured.

    • If not, the drawing buffer has the specified configuration.format and is tagged with the specified configuration.colorSpace and configuration.toneMapping.

    Note: configuration.alphaMode is ignored until "get a copy of the image contents of a context".

    NOTE:
    A newly replaced drawing buffer image behaves as if it is cleared to transparent black, but, like after "discard", an implementation can clear it lazily only if it becomes necessary.

    Note: This will often be a no-op, if the drawing buffer is already cleared and has the correct configuration.

To Expire the current texture of a GPUCanvasContext context, run the following content timeline steps:
  1. If context.[[currentTexture]] is not null:

    1. Call context.[[currentTexture]].destroy() (without destroying context.[[drawingBuffer]]) to terminate write access to the image.

    2. Set context.[[currentTexture]] to null.

21.3. HTML Specification Hooks

The following algorithms "hook" into algorithms in the HTML specification, and must run at the specified points.

When the "bitmap" is read from an HTMLCanvasElement or OffscreenCanvas with a GPUCanvasContext context, run the following content timeline steps:
  1. Return a copy of the image contents of context.

NOTE:
This occurs in many places, including:

If alphaMode is "opaque", this incurs a clear of the alpha channel. Implementations may skip this step when they are able to read or display images in a way that ignores the alpha channel.

If an application needs a canvas only for interop (not presentation), avoid "opaque" if it is not needed.

When updating the rendering of a WebGPU canvas (an HTMLCanvasElement or an OffscreenCanvas with a placeholder canvas element) with a GPUCanvasContext context, which occurs before getting the canvas’s image contents, in the following sub-steps of the event loop processing model:

Note: Service and Shared workers do not have "update the rendering" steps because they cannot render to user-visible canvases. requestAnimationFrame() is not exposed in ServiceWorkerGlobalScope and SharedWorkerGlobalScope, and OffscreenCanvases from transferControlToOffscreen() cannot be sent to these workers.

Run the following content timeline steps:

  1. Expire the current texture of context.

    Note: If this already happened in the task queued by getCurrentTexture(), it has no effect.

  2. Set context.[[lastPresentedImage]] to context.[[drawingBuffer]].

    Note: This is just a reference, not a copy; the drawing buffer’s contents can’t change in-place after the current texture has expired.

Note: This does not happen for standalone OffscreenCanvases (created by new OffscreenCanvas()).

transferToImageBitmap from WebGPU:

When transferToImageBitmap() is called on a canvas with GPUCanvasContext context, after creating an ImageBitmap from the canvas’s bitmap, run the following content timeline steps:

  1. Replace the drawing buffer of context.

Note: This makes transferToImageBitmap() equivalent to "moving" (and possibly alpha-clearing) the image contents into the ImageBitmap, without a copy.

21.4. GPUCanvasConfiguration

The supported context formats are the set of GPUTextureFormats: «"bgra8unorm", "rgba8unorm", "rgba16float"». These formats must be supported when specified as a GPUCanvasConfiguration.format regardless of the given GPUCanvasConfiguration.device.

Note: Canvas configuration cannot use srgb formats like "bgra8unorm-srgb". Instead, use the non-srgb equivalent ("bgra8unorm"), specify the srgb format in the viewFormats, and use createView() to create a view with an srgb format.
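For example, sRGB-encoded rendering to a canvas might be configured as follows. This is a browser-only configuration sketch (it assumes a valid `gpuDevice` is already available):

```javascript
// Configure the canvas with a non-srgb format, listing the srgb
// equivalent in viewFormats, then render through an srgb view of
// the current texture.
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');

context.configure({
  device: gpuDevice,
  format: 'bgra8unorm',
  viewFormats: ['bgra8unorm-srgb'],
});

const view = context.getCurrentTexture().createView({ format: 'bgra8unorm-srgb' });
```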

enum GPUCanvasAlphaMode {
    "opaque",
    "premultiplied",
};

enum GPUCanvasToneMappingMode {
    "standard",
    "extended",
};

dictionary GPUCanvasToneMapping {
  GPUCanvasToneMappingMode mode = "standard";
};

dictionary GPUCanvasConfiguration {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    sequence<GPUTextureFormat> viewFormats = [];
    PredefinedColorSpace colorSpace = "srgb";
    GPUCanvasToneMapping toneMapping = {};
    GPUCanvasAlphaMode alphaMode = "opaque";
};

GPUCanvasConfiguration has the following members:

device, of type GPUDevice

The GPUDevice that textures returned by getCurrentTexture() will be compatible with.

format, of type GPUTextureFormat

The format that textures returned by getCurrentTexture() will have. Must be one of the Supported context formats.

usage, of type GPUTextureUsageFlags, defaulting to 0x10

The usage that textures returned by getCurrentTexture() will have. RENDER_ATTACHMENT is the default, but is not automatically included if the usage is explicitly set. Be sure to include RENDER_ATTACHMENT when setting a custom usage if you wish to use textures returned by getCurrentTexture() as color targets for a render pass.
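Since setting usage overwrites the default rather than adding to it, RENDER_ATTACHMENT must be re-included explicitly when combining usages. A small sketch, using the flag values defined by GPUTextureUsage:

```javascript
// Flag values as defined on GPUTextureUsage:
const COPY_SRC = 0x01;          // GPUTextureUsage.COPY_SRC
const RENDER_ATTACHMENT = 0x10; // GPUTextureUsage.RENDER_ATTACHMENT

// Overwrites the default: the texture can be copied from, but can no
// longer be used as a render pass color target.
const copyOnly = COPY_SRC;

// Combines both usages with bitwise OR, preserving render attachment use.
const copyAndRender = COPY_SRC | RENDER_ATTACHMENT;
```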

viewFormats, of type sequence<GPUTextureFormat>, defaulting to []

The formats that views created from textures returned by getCurrentTexture() may use.

colorSpace, of type PredefinedColorSpace, defaulting to "srgb"

The color space that values written into textures returned by getCurrentTexture() should be displayed with.

toneMapping, of type GPUCanvasToneMapping, defaulting to {}

The tone mapping determines how the content of textures returned by getCurrentTexture() are to be displayed.

Note: If an implementation doesn’t support HDR WebGPU canvases, it should also not expose this member, to allow for feature detection. See getConfiguration().

alphaMode, of type GPUCanvasAlphaMode, defaulting to "opaque"

Determines the effect that alpha values will have on the content of textures returned by getCurrentTexture() when read, displayed, or used as an image source.

Configure a GPUCanvasContext to be used with a specific GPUDevice, using the preferred format for this context:
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');

context.configure({
    device: gpuDevice,
    format: navigator.gpu.getPreferredCanvasFormat(),
});
The GPUTextureDescriptor for the canvas and configuration(canvas, configuration), for an HTMLCanvasElement or OffscreenCanvas canvas and a GPUCanvasConfiguration configuration, is a GPUTextureDescriptor with the following members:

  • size set to [canvas.width, canvas.height, 1]

  • format set to configuration.format

  • usage set to configuration.usage

  • viewFormats set to configuration.viewFormats

and other members set to their defaults.

canvas.width refers to HTMLCanvasElement.width or OffscreenCanvas.width. canvas.height refers to HTMLCanvasElement.height or OffscreenCanvas.height.

21.4.1. Canvas Color Space

During presentation, the color values in the canvas are converted to the color space of the screen.

The toneMapping determines the handling of values outside of the [0, 1] interval in the color space of the screen.

21.4.2. Canvas Context sizing

All canvas configuration is set in configure() except for the resolution of the canvas, which is set by the canvas’s width and height.

Note: Like WebGL and 2d canvas, resizing a WebGPU canvas loses the current contents of the drawing buffer. In WebGPU, it does so by replacing the drawing buffer.

When an HTMLCanvasElement or OffscreenCanvas canvas with a GPUCanvasContext context has its width or height attributes set, update the canvas size by running the following content timeline steps:
  1. Replace the drawing buffer of context.

  2. Let configuration be context.[[configuration]]

  3. If configuration is not null:

    1. Set context.[[textureDescriptor]] to the GPUTextureDescriptor for the canvas and configuration(canvas, configuration).

Note: This may result in a GPUTextureDescriptor which exceeds the maxTextureDimension2D of the device. In this case, validation will fail inside getCurrentTexture().

Note: This algorithm is run any time the canvas width or height attributes are set, even if their value is not changed.

21.5. GPUCanvasToneMappingMode

This enum specifies how color values are displayed to the screen.

"standard"

Color values within the standard dynamic range of the screen are unchanged, and all other color values are projected to the standard dynamic range of the screen.

Note: This projection is often accomplished by clamping color values in the color space of the screen to the [0, 1] interval.

For example, suppose that the value (1.035, -0.175, -0.140) is written to an 'srgb' canvas.

If this is presented to an sRGB screen, then this will be converted to sRGB (which is a no-op, because the canvas is sRGB), then projected into the display’s space. Using component-wise clamping, this results in the sRGB value (1.0, 0.0, 0.0).

If this is presented to a Display P3 screen, then this will be converted to the value (0.948, 0.106, 0.01) in the Display P3 color space, and no clamping will be needed.

"extended"

Color values in the extended dynamic range of the screen are unchanged, and all other color values are projected to the extended dynamic range of the screen.

Note: This projection is often accomplished by clamping color values in the color space of the screen to the interval of values that the screen is capable of displaying, which may include values greater than 1.

For example, suppose that the value (2.5, -0.15, -0.15) is written to an 'srgb' canvas.

If this is presented to an sRGB screen that is capable of displaying values in the [0, 4] interval in sRGB space, then this will be converted to sRGB (which is a no-op, because the canvas is sRGB), then projected into the display’s space. If using component-wise clamping, this results in the sRGB value (2.5, 0.0, 0.0).

If this is presented to a Display P3 screen that is capable of displaying values in the [0, 2] interval in Display P3 space, then this will be converted to the value (2.3, 0.545, 0.386) in the Display P3 color space, then projected into the display’s space. If using component-wise clamping, this results in the Display P3 value (2.0, 0.545, 0.386).
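The component-wise clamping described in the notes above can be sketched as a small helper. This is an illustrative model only (real implementations may project out-of-range values differently):

```javascript
// Project a color (already converted to the screen's color space) into
// the range the screen can display, by component-wise clamping.
// "standard" clamps to [0, 1]; "extended" clamps to [0, max], where
// `max` is the largest value the screen can display (>= 1).
function projectToScreen(color, mode, max = 1.0) {
  const hi = mode === 'extended' ? max : 1.0;
  return color.map(c => Math.min(Math.max(c, 0.0), hi));
}

// Matches the examples above:
// projectToScreen([1.035, -0.175, -0.140], 'standard')    -> [1, 0, 0]
// projectToScreen([2.5, -0.15, -0.15], 'extended', 4)     -> [2.5, 0, 0]
```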

21.6. GPUCanvasAlphaMode

This enum selects how the contents of the canvas will be interpreted when read, when displayed to the screen, or when used as an image source (in drawImage, toDataURL, etc.).

Below, src is a value in the canvas texture, and dst is an image that the canvas is being composited into (e.g. an HTML page rendering, or a 2D canvas).

"opaque"

Read RGB as opaque and ignore alpha values. If the content is not already opaque, the alpha channel is cleared to 1.0 in "get a copy of the image contents of a context".

"premultiplied"

Read RGBA as premultiplied: color values are premultiplied by their alpha value. 100% red at 50% alpha is [0.5, 0, 0, 0.5].

If the canvas texture contains out-of-gamut premultiplied RGBA values at the time the canvas contents are read, the behavior depends on whether the canvas is:

used as an image source

Values are preserved, as described in color space conversion.

displayed to the screen

Compositing results are undefined.

Note: This is true even if color space conversion would produce in-gamut values before compositing, because the intermediate format for compositing is not specified.
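The "premultiplied" interpretation can be illustrated with a small hypothetical helper (not part of the API):

```javascript
// Convert a straight-alpha RGBA color to the premultiplied form that
// the "premultiplied" alphaMode expects: each color channel is
// multiplied by the alpha value.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}

// 100% red at 50% alpha becomes [0.5, 0, 0, 0.5], matching the
// example given under "premultiplied" above.
```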

22. Errors & Debugging

During the normal course of operation of WebGPU, errors are raised via dispatch error.

After a device is lost, errors are no longer surfaced where possible. Past this point, implementations do not need to run validation or track errors.

22.1. Fatal Errors

enum GPUDeviceLostReason {
    "unknown",
    "destroyed",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUDeviceLostInfo {
    readonly attribute GPUDeviceLostReason reason;
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

GPUDevice has the following additional attributes:

lost, of type Promise<GPUDeviceLostInfo>, readonly

A slot-backed attribute holding a promise which is created with the device, remains pending for the lifetime of the device, then resolves when the device is lost.

Upon initialization, it is set to a new promise.

22.2. GPUError

[Exposed=(Window, Worker), SecureContext]
interface GPUError {
    readonly attribute DOMString message;
};

GPUError is the base interface for all errors surfaced from popErrorScope() and the uncapturederror event.

Errors must only be generated for operations that explicitly state the conditions one may be generated under in their respective algorithms, and the subtype of error that is generated.

No errors are generated from a device which is lost. See § 22 Errors & Debugging.

Note: GPUError may gain new subtypes in future versions of this spec. Applications should handle this possibility, using only the error’s message when possible, and specializing using instanceof. Use error.constructor.name when it’s necessary to serialize an error (e.g. into JSON, for a debug report).

GPUError has the following immutable properties:

message, of type DOMString, readonly

A human-readable, localizable text message providing information about the error that occurred.

Note: This message is generally intended for application developers to debug their applications and capture information for debug reports, not to be surfaced to end-users.

Note: User agents should not include potentially machine-parsable details in this message, such as free system memory on "out-of-memory" or other details about the conditions under which memory was exhausted.

Note: The message should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.

Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.

[Exposed=(Window, Worker), SecureContext]
interface GPUValidationError
        : GPUError {
    constructor(DOMString message);
};

GPUValidationError is a subtype of GPUError which indicates that an operation did not satisfy all validation requirements. Validation errors are always indicative of an application error, and are expected to fail the same way across all devices assuming the same [[features]] and [[limits]] are in use.

To generate a validation error for GPUDevice device, run the following steps:

Device timeline steps:

  1. Let error be a new GPUValidationError with an appropriate error message.

  2. Dispatch error error to device.

[Exposed=(Window, Worker), SecureContext]
interface GPUOutOfMemoryError
        : GPUError {
    constructor(DOMString message);
};

GPUOutOfMemoryError is a subtype of GPUError which indicates that there was not enough free memory to complete the requested operation. The operation may succeed if attempted again with a lower memory requirement (like using smaller texture dimensions), or if memory used by other resources is released first.

To generate an out-of-memory error for GPUDevice device, run the following steps:

Device timeline steps:

  1. Let error be a new GPUOutOfMemoryError with an appropriate error message.

  2. Dispatch error error to device.

[Exposed=(Window, Worker), SecureContext]
interface GPUInternalError
        : GPUError {
    constructor(DOMString message);
};

GPUInternalError is a subtype of GPUError which indicates that an operation failed for a system or implementation-specific reason even when all validation requirements have been satisfied. For example, the operation may exceed the capabilities of the implementation in a way not easily captured by the supported limits. The same operation may succeed on other devices or under different circumstances.

To generate an internal error for GPUDevice device, run the following steps:

Device timeline steps:

  1. Let error be a new GPUInternalError with an appropriate error message.

  2. Dispatch error error to device.

22.3. Error Scopes

A GPU error scope captures GPUErrors that were generated while the GPU error scope was current. Error scopes are used to isolate errors that occur within a set of WebGPU calls, typically for debugging purposes or to make an operation more fault tolerant.

GPU error scope has the following device timeline properties:

[[errors]], of type list<GPUError>, initially []

The GPUErrors, if any, observed while the GPU error scope was current.

[[filter]], of type GPUErrorFilter

Determines what type of GPUError this GPU error scope observes.

enum GPUErrorFilter {
    "validation",
    "out-of-memory",
    "internal",
};

partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};

GPUErrorFilter defines the type of errors that should be caught when calling pushErrorScope():

"validation"

Indicates that the error scope will catch a GPUValidationError.

"out-of-memory"

Indicates that the error scope will catch a GPUOutOfMemoryError.

"internal"

Indicates that the error scope will catch a GPUInternalError.

GPUDevice has the following device timeline properties:

[[errorScopeStack]], of type stack<GPU error scope>

A stack of GPU error scopes that have been pushed to the GPUDevice.

The current error scope for a GPUError error and GPUDevice device is determined by issuing the following steps to the device timeline of device:

Device timeline steps:

  1. If error is an instance of:

    GPUValidationError

    Let type be "validation".

    GPUOutOfMemoryError

    Let type be "out-of-memory".

    GPUInternalError

    Let type be "internal".

  2. Let scope be the last item of device.[[errorScopeStack]].

  3. While scope is not undefined:

    1. If scope.[[filter]] is type, return scope.

    2. Set scope to the previous item of device.[[errorScopeStack]].

  4. Return undefined.
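The scope-matching walk above can be sketched in JavaScript. This is a simplified model of the device timeline state; the names are illustrative:

```javascript
// Map each GPUError subtype to the GPUErrorFilter value that matches it.
const filterFor = {
  GPUValidationError: 'validation',
  GPUOutOfMemoryError: 'out-of-memory',
  GPUInternalError: 'internal',
};

// Simplified model of "current error scope": walk the error scope stack
// from top to bottom and return the nearest scope whose filter matches
// the error's type, or undefined if none matches.
function currentErrorScope(errorScopeStack, errorTypeName) {
  const type = filterFor[errorTypeName];
  for (let i = errorScopeStack.length - 1; i >= 0; i--) {
    if (errorScopeStack[i].filter === type) return errorScopeStack[i];
  }
  return undefined; // the error will surface as an uncapturederror event
}
```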

To dispatch an error GPUError error on GPUDevice device, run the following device timeline steps:
Device timeline steps:

Note: No errors are generated from a device which is lost. If this algorithm is called while device is lost, it will not be observable to the application. See § 22 Errors & Debugging.

  1. Let scope be the current error scope for error and device.

  2. If scope is not undefined:

    1. Append error to scope.[[errors]].

    2. Return.

  3. Otherwise issue the following steps to the content timeline:

Content timeline steps:
  1. If the user agent chooses, queue a global task for GPUDevice device with the following steps:

    1. Fire a GPUUncapturedErrorEvent named "uncapturederror" on device, with an error of error.

Note: After dispatching the event, user agents should surface uncaptured errors to developers, for example as warnings in the browser’s developer console, unless the event’s defaultPrevented is true. In other words, calling preventDefault() on the event should silence the console warning.

Note: The user agent may choose to throttle or limit the number of GPUUncapturedErrorEvents that a GPUDevice can raise to prevent an excessive amount of error handling or logging from impacting performance.

pushErrorScope(filter)

Pushes a new GPU error scope onto the [[errorScopeStack]] for this.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.pushErrorScope(filter) method.
Arguments for the GPUDevice.pushErrorScope(filter) method:

filter, of type GPUErrorFilter: Which class of errors this error scope observes.

Returns: undefined

Content timeline steps:

  1. Issue the subsequent steps on the Device timeline of this.

Device timeline steps:
  1. Let scope be a new GPU error scope.

  2. Set scope.[[filter]] to filter.

  3. Push scope onto this.[[errorScopeStack]].

popErrorScope()

Pops a GPU error scope off the [[errorScopeStack]] for this and resolves to any GPUError observed by the error scope, or null if none.

There is no guarantee of the ordering of promise resolution.

Called on: GPUDevice this.

Returns: Promise<GPUError?>

Content timeline steps:

  1. Let contentTimeline be the current Content timeline.

  2. Let promise be a new promise.

  3. Issue the check steps on the Device timeline of this.

  4. Return promise.

Device timeline check steps:
  1. If this is lost:

    1. Issue the following steps on contentTimeline:

      Content timeline steps:
      1. Resolve promise with null.

    2. Return.

    Note: No errors are generated from a device which is lost. See § 22 Errors & Debugging.

  2. If any of the following requirements are unmet:

    • this.[[errorScopeStack]].size > 0

    Then issue the following steps on contentTimeline and return:

    Content timeline steps:
    1. Reject promise with an OperationError.

  3. Let scope be the result of popping an item off of this.[[errorScopeStack]].

  4. Let error be any one of the items in scope.[[errors]], or null if there are none.

    For any two errors E1 and E2 in the list, if E2 was caused by E1, E2 should not be the one selected.

    Note: For example, if E1 comes from t = createTexture(), and E2 comes from t.createView() because t was invalid, E1 should be preferred since it will be easier for a developer to understand what went wrong. Since both of these are GPUValidationErrors, the only difference will be in the message field, which is meant only to be read by humans anyway.

  5. At an unspecified point now or in the future, issue the subsequent steps on contentTimeline.

    Note: By allowing popErrorScope() calls to resolve in any order, with any of the errors observed by the scope, this spec allows validation to complete out of order, as long as any state observations are made at the appropriate point in adherence to this spec. For example, this allows implementations to perform shader compilation, which depends only on non-stateful inputs, to be completed on a background thread in parallel with other device-timeline work, and report any resulting errors later.

Content timeline steps:
  1. Resolve promise with error.

Using error scopes to capture validation errors from a GPUDevice operation that may fail:
gpuDevice.pushErrorScope('validation');

let sampler = gpuDevice.createSampler({
    maxAnisotropy: 0, // Invalid, maxAnisotropy must be at least 1.
});

gpuDevice.popErrorScope().then((error) => {
    if (error) {
        // There was an error creating the sampler, so discard it.
        sampler = null;
        console.error(`An error occurred while creating sampler: ${error.message}`);
    }
});
NOTE:
Error scopes can encompass as many commands as needed. The number of commands an error scope covers will generally correlate with what sort of action the application intends to take in response to an error occurring.

For example: An error scope that only contains the creation of a single resource, such as a texture or buffer, can be used to detect failures such as out of memory conditions, in which case the application may try freeing some resources and trying the allocation again.

Error scopes do not identify which command failed, however. So, for instance, wrapping all the commands executed while loading a model in a single error scope will not offer enough granularity to determine if the issue was due to memory constraints. As a result, freeing resources would usually not be a productive response to a failure of that scope. A more appropriate response would be to allow the application to fall back to a different model or produce a warning that the model could not be loaded. If responding to memory constraints is desired, the operations allocating memory can always be wrapped in a smaller nested error scope.

22.4. Telemetry

When a GPUError is generated that is not observed by any GPU error scope, the user agent may fire an event named uncapturederror at a GPUDevice using GPUUncapturedErrorEvent.

Note: uncapturederror events are intended to be used for telemetry and reporting unexpected errors. They may not be dispatched for all uncaptured errors (for example, there may be a limit on the number of errors surfaced), and should not be used for handling known error cases that may occur during normal operation of an application. Prefer using pushErrorScope() and popErrorScope() in those cases.

[Exposed=(Window, Worker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

GPUUncapturedErrorEvent has the following attributes:

error, of type GPUError, readonly

A slot-backed attribute holding an object representing the error that was uncaptured. This has the same type as errors returned by popErrorScope().

partial interface GPUDevice {
    attribute EventHandler onuncapturederror;
};

GPUDevice has the following content timeline properties:

onuncapturederror, of type EventHandler

An event handler IDL attribute for the uncapturederror event type.

Listening for uncaptured errors from a GPUDevice:
gpuDevice.addEventListener('uncapturederror', (event) => {
    // Re-surface the error, because adding an event listener may silence console logs.
    console.error('A WebGPU error was not captured:', event.error);

    myEngineDebugReport.uncapturedErrors.push({
        type: event.error.constructor.name,
        message: event.error.message,
    });
});

23. Detailed Operations

This section describes the details of various GPU operations.

23.1. Computing

Computing operations provide direct access to the GPU’s programmable hardware. Compute shaders do not have shader stage inputs or outputs; their results are side effects from writing data into storage bindings bound either as GPUBufferBindingLayout with GPUBufferBindingType "storage" or as GPUStorageTextureBindingLayout. These operations are encoded within GPUComputePassEncoder as:

The main compute algorithm:

compute(descriptor, dispatchCall)

Arguments:

  1. Let computeInvocations be an empty list.

  2. Let computeStage be descriptor.compute.

  3. Let workgroupSize be the computed workgroup size for computeStage.entryPoint after applying computeStage.constants to computeStage.module.

  4. For workgroupX in range [0, dispatchCall.workgroupCountX]:

    1. For workgroupY in range [0, dispatchCall.workgroupCountY]:

      1. For workgroupZ in range [0, dispatchCall.workgroupCountZ]:

        1. For localX in range [0, workgroupSize.x]:

          1. For localY in range [0, workgroupSize.y]:

            1. For localZ in range [0, workgroupSize.z]:

              1. Let invocation be { computeStage, workgroupX, workgroupY, workgroupZ, localX, localY, localZ }

              2. Append invocation to computeInvocations.

  5. For every invocation in computeInvocations, in any order the device chooses, including in parallel:

    1. Set the shader builtins:

      • Set the num_workgroups builtin, if any, to (
        dispatchCall.workgroupCountX,
        dispatchCall.workgroupCountY,
        dispatchCall.workgroupCountZ
        )

      • Set the workgroup_id builtin, if any, to (
        invocation.workgroupX,
        invocation.workgroupY,
        invocation.workgroupZ
        )

      • Set the local_invocation_id builtin, if any, to (
        invocation.localX,
        invocation.localY,
        invocation.localZ
        )

      • Set the global_invocation_id builtin, if any, to (
        invocation.workgroupX * workgroupSize.x + invocation.localX,
        invocation.workgroupY * workgroupSize.y + invocation.localY,
        invocation.workgroupZ * workgroupSize.z + invocation.localZ
        )
        .

      • Set the local_invocation_index builtin, if any, to invocation.localX + (invocation.localY * workgroupSize.x) + (invocation.localZ * workgroupSize.x * workgroupSize.y)

    2. Invoke the compute shader entry point described by invocation.computeStage.

Note: Shader invocations have no guaranteed order, and will generally run in parallel according to device capabilities. Developers should not assume that any given invocation or workgroup will complete before any other one is started. Some devices may appear to execute in a consistent order, but this behavior should not be relied on as it will not perform identically across all devices. Shaders that require synchronization across invocations must use Synchronization Built-in Functions to coordinate execution.

The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
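The builtin values assigned in the dispatch loops above are plain arithmetic over the workgroup and local ids; this sketch mirrors the spec’s formulas (the function name is illustrative):

```javascript
// Compute the per-invocation builtin values for a given workgroup id,
// local id, and workgroup size (each an [x, y, z] triple), as in the
// `compute` algorithm above.
function computeBuiltins(workgroupId, localId, workgroupSize) {
  return {
    workgroup_id: workgroupId,
    local_invocation_id: localId,
    global_invocation_id: [
      workgroupId[0] * workgroupSize[0] + localId[0],
      workgroupId[1] * workgroupSize[1] + localId[1],
      workgroupId[2] * workgroupSize[2] + localId[2],
    ],
    // Linearized index of the invocation within its workgroup.
    local_invocation_index:
      localId[0] +
      localId[1] * workgroupSize[0] +
      localId[2] * workgroupSize[0] * workgroupSize[1],
  };
}
```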

23.2. Rendering

Rendering is done by a set of GPU operations that are executed within GPURenderPassEncoder, and result in modifications of the texture data, viewed by the render pass attachments. These operations are encoded with:

Note: rendering is the traditional use of GPUs, and is supported by multiple fixed-function blocks in hardware.

The main rendering algorithm:

render(pipeline, drawCall, state)

Arguments:

  1. Let descriptor be pipeline.[[descriptor]].

  2. Resolve indices. See § 23.2.1 Index Resolution.

    Let vertexList be the result of resolve indices(drawCall, state).

  3. Process vertices. See § 23.2.2 Vertex Processing.

    Execute process vertices(vertexList, drawCall, descriptor.vertex, state).

  4. Assemble primitives. See § 23.2.3 Primitive Assembly.

    Execute assemble primitives(vertexList, drawCall, descriptor.primitive).

  5. Clip primitives. See § 23.2.4 Primitive Clipping.

    Let primitiveList be the result of this stage.

  6. Rasterize. See § 23.2.5 Rasterization.

    Let rasterizationList be the result of rasterize(primitiveList, state).

  7. Process fragments. See § 23.2.6 Fragment Processing.

    Gather a list of fragments, resulting from executing process fragment(rasterPoint, descriptor, state) for each rasterPoint in rasterizationList.

  8. Write pixels. See § 23.2.7 Output Merging.

    For each non-null fragment of fragments:

23.2.1. Index Resolution

At the first stage of rendering, the pipeline builds a list of vertices to process for each instance.

resolve indices(drawCall, state)

Arguments:

Returns: list of integer indices.

  1. Let vertexIndexList be an empty list of indices.

  2. If drawCall is an indexed draw call:

    1. Initialize the vertexIndexList with drawCall.indexCount integers.

    2. For i in range 0 .. drawCall.indexCount (non-inclusive):

      1. Let relativeVertexIndex be fetch index(i + drawCall.firstIndex, state.[[index_buffer]]).

      2. If relativeVertexIndex has the special value "out of bounds", return the empty list.

        Note: Implementations may choose to display a warning when this occurs, especially when it is easy to detect (like in non-indirect indexed draw calls).

      3. Append drawCall.baseVertex + relativeVertexIndex to the vertexIndexList.

  3. Otherwise:

    1. Initialize the vertexIndexList with drawCall.vertexCount integers.

    2. Set each vertexIndexList item i to the value drawCall.firstVertex + i.

  4. Return vertexIndexList.

Note: in the case of indirect draw calls, the indexCount, vertexCount, and other properties of drawCall are read from the indirect buffer instead of the draw command itself.
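The indexed and non-indexed branches of this algorithm can be sketched as follows. This is a simplified model; `fetchIndex` stands in for the spec’s `fetch index` algorithm, returning an unsigned integer or the string "out of bounds":

```javascript
// Simplified model of "resolve indices": build the list of vertex
// indices for one draw call.
function resolveIndices(drawCall, fetchIndex) {
  const vertexIndexList = [];
  if (drawCall.indexed) {
    for (let i = 0; i < drawCall.indexCount; i++) {
      const rel = fetchIndex(i + drawCall.firstIndex);
      if (rel === 'out of bounds') return []; // the whole draw is discarded
      vertexIndexList.push(drawCall.baseVertex + rel);
    }
  } else {
    for (let i = 0; i < drawCall.vertexCount; i++) {
      vertexIndexList.push(drawCall.firstVertex + i);
    }
  }
  return vertexIndexList;
}
```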

fetch index(i, buffer)

Arguments:

Returns: unsigned integer or "out of bounds"

  1. Let indexSize be defined by the state.[[index_format]]:

    "uint16"

    2

    "uint32"

    4

  2. If state.[[index_buffer_offset]] + (i + 1) × indexSize > state.[[index_buffer_size]], return the special value "out of bounds".

  3. Interpret the data in state.[[index_buffer]], starting at offset state.[[index_buffer_offset]] + i × indexSize, of size indexSize bytes, as an unsigned integer and return it.
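The two algorithms above can be sketched in JavaScript. This is an illustrative model only, not normative spec text: drawCall and state are plain objects standing in for the internal draw parameters and render state, and the index buffer is modeled as a DataView over the bound buffer's bytes.

```javascript
// Sketch of § 23.2.1 index resolution (illustrative, not normative).
function fetchIndex(i, state) {
  const indexSize = state.indexFormat === "uint16" ? 2 : 4; // "uint32" -> 4
  const offset = state.indexBufferOffset + i * indexSize;
  if (offset + indexSize > state.indexBufferSize) return "out of bounds";
  return state.indexFormat === "uint16"
    ? state.indexBuffer.getUint16(offset, true)  // little-endian
    : state.indexBuffer.getUint32(offset, true);
}

function resolveIndices(drawCall, state) {
  const vertexIndexList = [];
  if (drawCall.indexed) {
    for (let i = 0; i < drawCall.indexCount; i++) {
      const rel = fetchIndex(i + drawCall.firstIndex, state);
      if (rel === "out of bounds") return []; // cancels the draw call
      vertexIndexList.push(drawCall.baseVertex + rel);
    }
  } else {
    for (let i = 0; i < drawCall.vertexCount; i++) {
      vertexIndexList.push(drawCall.firstVertex + i);
    }
  }
  return vertexIndexList;
}
```

For example, a non-indexed draw with vertexCount 3 and firstVertex 10 resolves to the list [10, 11, 12], while an indexed draw whose index fetch runs past the bound index buffer range resolves to the empty list.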

23.2.2. Vertex Processing

Vertex processing stage is a programmable stage of the render pipeline that processes the vertex attribute data, and produces clip space positions for § 23.2.4 Primitive Clipping, as well as other data for the § 23.2.6 Fragment Processing.

process vertices(vertexIndexList, drawCall, desc, state)

Arguments:

Each vertex vertexIndex in the vertexIndexList, in each instance of index rawInstanceIndex, is processed independently. The rawInstanceIndex is in range from 0 to drawCall.instanceCount - 1, inclusive. This processing happens in parallel, and any side effects, such as writes into GPUBufferBindingType "storage" bindings, may happen in any order.

  1. Let instanceIndex be rawInstanceIndex + drawCall.firstInstance.

  2. For each non-null vertexBufferLayout in the list of desc.buffers:

    1. Let i be the index of the buffer layout in this list.

    2. Let vertexBuffer, vertexBufferOffset, and vertexBufferBindingSize be the buffer, offset, and size at slot i of state.[[vertex_buffers]].

    3. Let vertexElementIndex be dependent on vertexBufferLayout.stepMode:

      "vertex"

      vertexIndex

      "instance"

      instanceIndex

    4. Let drawCallOutOfBounds be false.

    5. For each attributeDesc in vertexBufferLayout.attributes:

      1. Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.arrayStride + attributeDesc.offset.

      2. If attributeOffset + byteSize(attributeDesc.format) > vertexBufferOffset + vertexBufferBindingSize:

        1. Set drawCallOutOfBounds to true.

        2. Optionally (implementation-defined), empty vertexIndexList and return, cancelling the draw call.

          Note: This allows implementations to detect out-of-bounds values in the index buffer before issuing a draw call, instead of using invalid memory reference behavior.

    6. For each attributeDesc in vertexBufferLayout.attributes:

      1. If drawCallOutOfBounds is true:

        1. Load the attribute data according to WGSL’s invalid memory reference behavior, from vertexBuffer.

          Note: Invalid memory reference allows several behaviors, including actually loading the "correct" result for an attribute that is in-bounds, even when the draw-call-wide drawCallOutOfBounds is true.

        Otherwise:

        1. Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.arrayStride + attributeDesc.offset.

        2. Load the attribute data of format attributeDesc.format from vertexBuffer starting at offset attributeOffset. The components are loaded in the order x, y, z, w from buffer memory.

      2. Convert the data into a shader-visible format, according to channel formats rules.

        An attribute of type "snorm8x2" and byte values of [0x70, 0xD0] will be converted to vec2<f32>(0.88, -0.38) in WGSL.
      3. Adjust the data size to the shader type:

        • if both are scalar, or both are vectors of the same dimensionality, no adjustment is needed.

        • if data is vector but the shader type is scalar, then only the first component is extracted.

        • if both are vectors, and data has a higher dimension, the extra components are dropped.

          An attribute of type "float32x3" and value vec3<f32>(1.0, 2.0, 3.0) will be exposed to the shader as vec2<f32>(1.0, 2.0) if a 2-component vector is expected.
        • if the shader type is a vector of higher dimensionality, or the data is a scalar, then the missing components are filled from vec4<*>(0, 0, 0, 1) value.

          An attribute of type "sint32" and value 5 will be exposed to the shader as vec4<i32>(5, 0, 0, 1) if a 4-component vector is expected.
      4. Bind the data to vertex shader input location attributeDesc.shaderLocation.

  3. For each GPUBindGroup group at index in state.[[bind_groups]]:

    1. For each resource GPUBindingResource in the bind group:

      1. Let entry be the corresponding GPUBindGroupLayoutEntry for this resource.

      2. If entry.visibility includes VERTEX:

  4. Set the shader builtins:

    • Set the vertex_index builtin, if any, to vertexIndex.

    • Set the instance_index builtin, if any, to instanceIndex.

  5. Invoke the vertex shader entry point described by desc.

    Note: The target platform may cache the results of vertex shader invocations. There is no guarantee that any vertexIndex that repeats more than once will result in multiple invocations. Similarly, there is no guarantee that a single vertexIndex will only be processed once.

    The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
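The size-adjustment rules in step 2.6.3 above can be sketched as follows. This is an illustrative model, not normative: data is the loaded attribute as an array of components, and shaderDim is the component count the shader input expects (1 for a scalar); missing components are filled from vec4<*>(0, 0, 0, 1).

```javascript
// Sketch of the attribute size adjustment rules (illustrative, not normative).
function adjustAttribute(data, shaderDim) {
  const filler = [0, 0, 0, 1]; // missing components come from vec4<*>(0, 0, 0, 1)
  const out = [];
  for (let c = 0; c < shaderDim; c++) {
    out.push(c < data.length ? data[c] : filler[c]);
  }
  return shaderDim === 1 ? out[0] : out; // a scalar shader type extracts component x
}
```

For example, a "float32x3" value [1, 2, 3] bound to a 2-component input yields [1, 2], and a "sint32" value [5] bound to a 4-component input yields [5, 0, 0, 1], matching the examples above.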

23.2.3. Primitive Assembly

Primitives are assembled by a fixed-function stage of GPUs.

assemble primitives(vertexIndexList, drawCall, desc)

Arguments:

For each instance, the primitives get assembled from the vertices that have been processed by the shaders, based on the vertexIndexList.

  1. First, if the primitive topology is a strip, (which means that desc.stripIndexFormat is not undefined) and the drawCall is indexed, the vertexIndexList is split into sub-lists using the maximum value of desc.stripIndexFormat as a separator.

    Example: a vertexIndexList with values [1, 2, 65535, 4, 5, 6] of type "uint16" will be split in sub-lists [1, 2] and [4, 5, 6].

  2. For each of the sub-lists vl, primitive generation is done according to the desc.topology:

    "line-list"

    Line primitives are composed from (vl.0, vl.1), then (vl.2, vl.3), then (vl.4, vl.5), etc. Each subsequent primitive takes 2 vertices.

    "line-strip"

    Line primitives are composed from (vl.0, vl.1), then (vl.1, vl.2), then (vl.2, vl.3), etc. Each subsequent primitive takes 1 vertex.

    "triangle-list"

    Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.3, vl.4, vl.5), then (vl.6, vl.7, vl.8), etc. Each subsequent primitive takes 3 vertices.

    "triangle-strip"

    Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.2, vl.1, vl.3), then (vl.2, vl.3, vl.4), then (vl.4, vl.3, vl.5), etc. Each subsequent primitive takes 1 vertex.

    Any incomplete primitives are dropped.
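The strip-splitting and triangle-strip rules above can be sketched as follows (illustrative only, not normative). maxIndexValue is the maximum value of the strip index format: 0xFFFF for "uint16", 0xFFFFFFFF for "uint32".

```javascript
// Sketch of § 23.2.3 primitive assembly for strips (illustrative, not normative).
function splitStrips(vertexIndexList, maxIndexValue) {
  const subLists = [[]];
  for (const idx of vertexIndexList) {
    if (idx === maxIndexValue) subLists.push([]); // separator starts a new sub-list
    else subLists[subLists.length - 1].push(idx);
  }
  return subLists;
}

function assembleTriangleStrip(vl) {
  const primitives = [];
  for (let i = 0; i + 2 < vl.length; i++) {
    // Alternate winding: (0,1,2), (2,1,3), (2,3,4), (4,3,5), ...
    primitives.push(i % 2 === 0
      ? [vl[i], vl[i + 1], vl[i + 2]]
      : [vl[i + 1], vl[i], vl[i + 2]]);
  }
  return primitives; // incomplete primitives are dropped
}
```

splitStrips([1, 2, 65535, 4, 5, 6], 0xFFFF) produces the sub-lists [[1, 2], [4, 5, 6]] from the example above, and assembleTriangleStrip([0, 1, 2, 3]) produces [[0, 1, 2], [2, 1, 3]].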

23.2.4. Primitive Clipping

Vertex shaders have to produce a built-in position (of type vec4<f32>), which denotes the clip position of a vertex in clip space coordinates.

Primitives are clipped to the clip volume, which, for any clip position p inside a primitive, is defined by the following inequalities:

−p.w ≤ p.x ≤ p.w

−p.w ≤ p.y ≤ p.w

0 ≤ p.z ≤ p.w

When the "clip-distances" feature is enabled, this clip volume can be further restricted by user-defined half-spaces by declaring clip_distances in the output of vertex stage. Each value in the clip_distances array will be linearly interpolated across the primitive, and the portion of the primitive with interpolated distances less than 0 will be clipped.

If descriptor.primitive.unclippedDepth is true, depth clipping is not applied: the clip volume is not bounded in the z dimension.

A primitive passes through this stage unchanged if every one of its edges lies entirely inside the clip volume. If the edges of a primitive intersect the boundary of the clip volume, the intersecting edges are reconnected by new edges that lie along the boundary of the clip volume. For triangular primitives (descriptor.primitive.topology is "triangle-list" or "triangle-strip"), this reconnection may result in the introduction of new vertices into the polygon, internally.

If a primitive intersects an edge of the clip volume’s boundary, the clipped polygon must include a point on this boundary edge.

If the vertex shader outputs other floating-point values (scalars and vectors), qualified with "perspective" interpolation, they also get clipped. The output values associated with a vertex that lies within the clip volume are unaffected by clipping. If a primitive is clipped, however, the output values assigned to vertices produced by clipping are clipped.

Considering an edge between vertices a and b that got clipped, resulting in the vertex c, let’s define t to be the ratio between the edge vertices: c.p = t × a.p + (1 − t) × b.p, where x.p is the output clip position of a vertex x.

For each vertex output value "v" with a corresponding fragment input, a.v and b.v would be the outputs for a and b vertices respectively. The clipped shader output c.v is produced based on the interpolation qualifier:

flat

Flat interpolation is unaffected, and is based on the provoking vertex, which is determined by the interpolation sampling mode declared in the shader. The output value is the same for the whole primitive, and matches the vertex output of the provoking vertex.

linear

The interpolation ratio gets adjusted against the perspective coordinates of the clip positions, so that the result of interpolation is linear in screen space.

perspective

The value is linearly interpolated in clip space, producing perspective-correct values.

The result of primitive clipping is a new set of primitives, which are contained within the clip volume.
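One way to realize the "perspective" and "linear" qualifiers for a clipped vertex can be sketched as follows. This is an illustrative model under stated assumptions, not the normative algorithm: a and b carry an output value v and clip-space w, and t is the clip-space ratio defined above (c.p = t × a.p + (1 − t) × b.p); the linear case adjusts the ratio by the w coordinates so the result is linear in screen space.

```javascript
// Sketch of clipped-output interpolation (illustrative, not normative).
function clipOutputValue(a, b, t, interpolation) {
  if (interpolation === "perspective") {
    // Linear interpolation in clip space yields perspective-correct values.
    return t * a.v + (1 - t) * b.v;
  }
  // "linear": adjust the ratio against the perspective (w) coordinates so
  // the interpolated value is linear in screen space.
  const s = (t * a.w) / (t * a.w + (1 - t) * b.w);
  return s * a.v + (1 - s) * b.v;
}
```

For two vertices with equal w the two qualifiers agree; they diverge as the w coordinates differ, which is exactly the perspective effect.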

23.2.5. Rasterization

Rasterization is the hardware processing stage that maps the generated primitives to the 2-dimensional rendering area of the framebuffer - the set of render attachments in the current GPURenderPassEncoder. This rendering area is split into an even grid of pixels.

The framebuffer coordinates start from the top-left corner of the render targets. Each unit corresponds exactly to one pixel. See § 3.3 Coordinate Systems for more information.

Rasterization determines the set of pixels affected by a primitive. In case of multi-sampling, each pixel is further split into descriptor.multisample.count samples. The standard sample patterns are as follows, with positions in framebuffer coordinates relative to the top-left corner of the pixel, such that the pixel ranges from (0, 0) to (1, 1):

multisample.count Sample positions
1 Sample 0: (0.5, 0.5)
4 Sample 0: (0.375, 0.125)
Sample 1: (0.875, 0.375)
Sample 2: (0.125, 0.625)
Sample 3: (0.625, 0.875)

Implementations must use the standard sample pattern for the given multisample.count when performing rasterization.

Let’s define a FragmentDestination to contain:

position

the 2D pixel position using framebuffer coordinates

sampleIndex

an integer in case § 23.2.10 Per-Sample Shading is active, or null otherwise

We’ll also use a notion of normalized device coordinates, or NDC. In this coordinate system, the viewport bounds range in X and Y from -1 to 1, and in Z from 0 to 1.

Rasterization produces a list of RasterizationPoints, each containing the following data:

destination

refers to FragmentDestination

coverageMask

refers to multisample coverage mask (see § 23.2.11 Sample Masking)

frontFacing

is true if it’s a point on the front face of a primitive

perspectiveDivisor

refers to interpolated 1.0 ÷ W across the primitive

depth

refers to the depth in viewport coordinates, i.e. between the [[viewport]] minDepth and maxDepth.

primitiveVertices

refers to the list of vertex outputs forming the primitive

barycentricCoordinates

refers to § 23.2.5.3 Barycentric coordinates

rasterize(primitiveList, state)

Arguments:

Returns: list of RasterizationPoint.

Each primitive in primitiveList is processed independently. However, the order of primitives affects later stages, such as depth/stencil operations and pixel writes.

  1. First, the clipped vertices are transformed into NDC - normalized device coordinates. Given the output position p, the NDC position and perspective divisor are:

    ndc(p) = vector(p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w)

    divisor(p) = 1.0 ÷ p.w

  2. Let vp be state.[[viewport]]. Map the NDC position n into viewport coordinates:

    • Compute framebuffer coordinates from the render target offset and size:

      framebufferCoords(n) = vector(vp.x + 0.5 × (n.x + 1) × vp.width, vp.y + 0.5 × (−n.y + 1) × vp.height)

    • Compute depth by linearly mapping [0,1] to the viewport depth range:

      depth(n) = vp.minDepth + n.z × ( vp.maxDepth - vp.minDepth )

  3. Let rasterizationPoints be the list of points, each having its attributes (divisor(p), framebufferCoords(n), depth(n), etc.) interpolated according to its position on the primitive, using the same interpolation as § 23.2.4 Primitive Clipping. If the attribute is user-defined (not a built-in output value) then the interpolation type specified by the @interpolate WGSL attribute is used.

  4. Proceed with a specific rasterization algorithm, depending on primitive.topology:

    "point-list"

    The point, if not filtered by § 23.2.4 Primitive Clipping, goes into § 23.2.5.1 Point Rasterization.

    "line-list" or "line-strip"

    The line cut by § 23.2.4 Primitive Clipping goes into § 23.2.5.2 Line Rasterization.

    "triangle-list" or "triangle-strip"

    The polygon produced in § 23.2.4 Primitive Clipping goes into § 23.2.5.4 Polygon Rasterization.

  5. Remove all the points rp from rasterizationPoints that have rp.destination.position outside of state.[[scissorRect]].

  6. Return rasterizationPoints.
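The NDC and viewport mappings in steps 1-2 above can be sketched directly from the formulas. This is illustrative only: p is a clip position {x, y, z, w} and vp mirrors the [[viewport]] state.

```javascript
// Sketch of the NDC and viewport transforms from rasterize() (illustrative).
function ndc(p) {
  return { x: p.x / p.w, y: p.y / p.w, z: p.z / p.w };
}

function divisor(p) {
  return 1.0 / p.w;
}

function framebufferCoords(n, vp) {
  return {
    x: vp.x + 0.5 * (n.x + 1) * vp.width,
    y: vp.y + 0.5 * (-n.y + 1) * vp.height, // Y flips: NDC +Y is up, framebuffer +Y is down
  };
}

function viewportDepth(n, vp) {
  return vp.minDepth + n.z * (vp.maxDepth - vp.minDepth);
}
```

For example, the clip position (0, 0, 0.5, 1) in an 800×600 viewport with depth range [0, 1] maps to the framebuffer center (400, 300) at depth 0.5.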

23.2.5.1. Point Rasterization

A single FragmentDestination is selected within the pixel containing the framebuffer coordinates of the point.

The coverage mask depends on multi-sampling mode:

sample-frequency multi-sampling

coverageMask = 1 ≪ sampleIndex

pixel-frequency multi-sampling

coverageMask = (1 ≪ descriptor.multisample.count) − 1

no multi-sampling

coverageMask = 1

23.2.5.2. Line Rasterization

The exact algorithm used for line rasterization is not defined, and may differ between implementations. For example, the line may be drawn using § 23.2.5.4 Polygon Rasterization of a 1px-width rectangle around the line segment, or using Bresenham’s line algorithm to select the FragmentDestinations.

Note: See Basic Line Segment Rasterization and Bresenham Line Segment Rasterization in the Vulkan 1.3 spec for more details of how these line rasterization algorithms may be implemented.

23.2.5.3. Barycentric coordinates

Barycentric coordinates are a list of n numbers bi, defined for a point p inside a convex polygon with n vertices vi in framebuffer space. Each bi is in range 0 to 1, inclusive, and represents the proximity to vertex vi. Their sum is always constant:

∑ (bi) = 1

These coordinates uniquely specify any point p within the polygon (or on its boundary) as:

p = ∑ (bi × pi)

For a polygon with 3 vertices - a triangle, barycentric coordinates of any point p can be computed as follows:

Apolygon = A(v1, v2, v3)

b1 = A(p, v2, v3) ÷ Apolygon

b2 = A(v1, p, v3) ÷ Apolygon

b3 = A(v1, v2, p) ÷ Apolygon

Where A(list of points) is the area of the polygon with the given set of vertices.

For polygons with more than 3 vertices, the exact algorithm is implementation-dependent. One of the possible implementations is to triangulate the polygon and compute the barycentrics of a point based on the triangle it falls into.
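The triangle case can be sketched directly from the area ratios above. This is illustrative only; it uses signed areas, so it assumes a consistent winding for the three vertices.

```javascript
// Sketch of barycentric coordinates for a triangle (illustrative, not normative).
function signedArea(p1, p2, p3) {
  return 0.5 * ((p2.x - p1.x) * (p3.y - p1.y) - (p3.x - p1.x) * (p2.y - p1.y));
}

function barycentric(p, v1, v2, v3) {
  const total = signedArea(v1, v2, v3); // Apolygon
  return [
    signedArea(p, v2, v3) / total, // b1
    signedArea(v1, p, v3) / total, // b2
    signedArea(v1, v2, p) / total, // b3
  ];
}
```

The resulting bi sum to 1, and reconstruct p as ∑ (bi × vi); for instance, the point (1, 1) in the triangle (0,0), (4,0), (0,4) has coordinates [0.5, 0.25, 0.25].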

23.2.5.4. Polygon Rasterization

A polygon is front-facing if it’s oriented towards the projection. Otherwise, the polygon is back-facing.

rasterize polygon()

Arguments:

Returns: list of RasterizationPoint.

  1. Let rasterizationPoints be an empty list.

  2. Let v(i) be the framebuffer coordinates for the clipped vertex number i (starting with 1) in a rasterized polygon of n vertices.

    Note: this section uses the term "polygon" instead of a "triangle", since § 23.2.4 Primitive Clipping stage may have introduced additional vertices. This is non-observable by the application.

  3. Determine if the polygon is front-facing, which depends on the sign of the area occupied by the polygon in framebuffer coordinates:

    area = 0.5 × ((v1.x × vn.y − vn.x × v1.y) + ∑ (vi+1.x × vi.y − vi.x × vi+1.y))

    The sign of area is interpreted based on the primitive.frontFace:

    "ccw"

    area > 0 is considered front-facing, otherwise back-facing

    "cw"

    area < 0 is considered front-facing, otherwise back-facing

  4. Cull based on primitive.cullMode:

    "none"

    All polygons pass this test.

    "front"

    The front-facing polygons are discarded, and are not processed in later stages of the render pipeline.

    "back"

    The back-facing polygons are discarded.

  5. Determine a set of fragments inside the polygon in framebuffer space - these are locations scheduled for the per-fragment operations. This operation is known as "point sampling". The logic is based on descriptor.multisample:

    disabled

    Fragments are associated with pixel centers. That is, all the points with coordinates C, where fract(C) = vector2(0.5, 0.5) in framebuffer space, that are enclosed in the polygon are included. If a pixel center is on the edge of the polygon, whether or not it’s included is not defined.

    Note: this becomes a subject of precision for the rasterizer.

    enabled

    Each pixel is associated with descriptor.multisample.count locations, which are implementation-defined. The locations are ordered, and the list is the same for each pixel of the framebuffer. Each location corresponds to one fragment in the multisampled framebuffer.

    The rasterizer builds a mask of locations being hit inside each pixel and provides it as the "sample-mask" built-in to the fragment shader.

  6. For each produced fragment of type FragmentDestination:

    1. Let rp be a new RasterizationPoint object

    2. Compute the list b as § 23.2.5.3 Barycentric coordinates of that fragment. Set rp.barycentricCoordinates to b.

    3. Let di be the depth value of vi.

    4. Set rp.depth to ∑ (bi × di)

    5. Append rp to rasterizationPoints.

  7. Return rasterizationPoints.
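The front-facing test in step 3 can be sketched directly from the signed-area formula. This is illustrative only: verts is the polygon's framebuffer-space vertex list, and the loop reproduces the 1-based sum in the formula above.

```javascript
// Sketch of the polygon facing test (illustrative, not normative).
function polygonArea(verts) {
  const n = verts.length;
  // (v1.x × vn.y − vn.x × v1.y) closes the polygon.
  let sum = verts[0].x * verts[n - 1].y - verts[n - 1].x * verts[0].y;
  for (let i = 0; i + 1 < n; i++) {
    sum += verts[i + 1].x * verts[i].y - verts[i].x * verts[i + 1].y;
  }
  return 0.5 * sum;
}

function isFrontFacing(verts, frontFace) {
  const area = polygonArea(verts);
  return frontFace === "ccw" ? area > 0 : area < 0;
}
```

Note that because framebuffer Y points down, a triangle that looks clockwise on screen, such as (0,0) → (1,0) → (0,1), produces a negative area and is front-facing under "cw".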

23.2.6. Fragment Processing

The fragment processing stage is a programmable stage of the render pipeline that computes the fragment data (often a color) to be written into render targets.

This stage produces a Fragment for each RasterizationPoint:

process fragment(rp, descriptor, state)

Arguments:

Returns: Fragment or null.

  1. Let fragmentDesc be descriptor.fragment.

  2. Let depthStencilDesc be descriptor.depthStencil.

  3. Let fragment be a new Fragment object.

  4. Set fragment.destination to rp.destination.

  5. Set fragment.frontFacing to rp.frontFacing.

  6. Set fragment.coverageMask to rp.coverageMask.

  7. Set fragment.depth to rp.depth.

  8. If frag_depth builtin is not produced by the shader:

    1. Set fragment.depthPassed to the result of compare fragment(fragment.destination, fragment.depth, "depth", state.[[depthStencilAttachment]], depthStencilDesc?.depthCompare).

  9. Set stencilState to depthStencilDesc?.stencilFront if rp.frontFacing is true and depthStencilDesc?.stencilBack otherwise.

  10. Set fragment.stencilPassed to the result of compare fragment(fragment.destination, state.[[stencilReference]], "stencil", state.[[depthStencilAttachment]], stencilState?.compare).

  11. If fragmentDesc is not null:

    1. If fragment.depthPassed is false, the frag_depth builtin is not produced by the shader entry point, and the shader entry point does not write to any storage bindings, the following steps may be skipped.

    2. Set the shader input builtins. For each non-composite argument of the entry point, annotated as a builtin, set its value based on the annotation:

      position

      vec4<f32>(rp.destination.position, rp.depth, rp.perspectiveDivisor)

      front_facing

      rp.frontFacing

      sample_index

      rp.destination.sampleIndex

      sample_mask

      rp.coverageMask

    3. For each user-specified shader stage input of the fragment stage:

      1. Let value be the interpolated fragment input, based on rp.barycentricCoordinates, rp.primitiveVertices, and the interpolation qualifier on the input.

      2. Set the corresponding fragment shader location input to value.

    4. Invoke the fragment shader entry point described by fragmentDesc.

      The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.

    5. If the fragment issued discard, return null.

    6. Set fragment.colors to the user-specified shader stage output values from the shader.

    7. Take the shader output builtins:

      1. If frag_depth builtin is produced by the shader as value:

        1. Let vp be state.[[viewport]].

        2. Set fragment.depth to clamp(value, vp.minDepth, vp.maxDepth).

        3. Set fragment.depthPassed to the result of compare fragment(fragment.destination, fragment.depth, "depth", state.[[depthStencilAttachment]], depthStencilDesc?.depthCompare).

    8. If sample_mask builtin is produced by the shader as value:

      1. Set fragment.coverageMask to fragment.coverageMask & value.

    Otherwise we are in § 23.2.8 No Color Output mode, and fragment.colors is empty.

  12. Return fragment.

compare fragment(destination, value, aspect, attachment, compareFunc)

Arguments:

Returns: true if the comparison passes, or false otherwise

Processing of fragments happens in parallel, and any side effects, such as writes into GPUBufferBindingType "storage" bindings, may happen in any order.

23.2.7. Output Merging

Output merging is a fixed-function stage of the render pipeline that outputs the fragment color, depth and stencil data to be written into the render pass attachments.

process depth stencil(fragment, pipeline, state)

Arguments:

  1. Let depthStencilDesc be pipeline.[[descriptor]].depthStencil.

  2. If pipeline.[[writesDepth]] is true and fragment.depthPassed is true:

    1. Set the value of the depth aspect of state.[[depthStencilAttachment]] at fragment.destination to fragment.depth.

  3. If pipeline.[[writesStencil]] is true:

    1. Set stencilState to depthStencilDesc.stencilFront if fragment.frontFacing is true and depthStencilDesc.stencilBack otherwise.

    2. If fragment.stencilPassed is false:

      • Let stencilOp be stencilState.failOp.

      Else if fragment.depthPassed is false:

      • Let stencilOp be stencilState.depthFailOp.

      Else:

      • Let stencilOp be stencilState.passOp.

    3. Update the value of the stencil aspect of state.[[depthStencilAttachment]] at fragment.destination by performing the operation described by stencilOp.

The depth input to this stage, if any, is clamped to the current [[viewport]] depth range (regardless of whether the fragment shader stage writes the frag_depth builtin).

process color attachments(fragment, pipeline, state)

Arguments:

  1. If fragment.depthPassed is false or fragment.stencilPassed is false, return.

  2. Let targets be pipeline.[[descriptor]].fragment.targets.

  3. For each attachment of state.[[colorAttachments]]:

    1. Let color be the value from fragment.colors that corresponds with attachment.

    2. Let targetDesc be the targets entry that corresponds with attachment.

    3. If targetDesc.blend is provided:

      1. Let colorBlend be targetDesc.blend.color.

      2. Let alphaBlend be targetDesc.blend.alpha.

      3. Set the RGB components of color to the value computed by performing the operation described by colorBlend.operation with the values described by colorBlend.srcFactor and colorBlend.dstFactor.

      4. Set the alpha component of color to the value computed by performing the operation described by alphaBlend.operation with the values described by alphaBlend.srcFactor and alphaBlend.dstFactor.

    4. Set the value of attachment at fragment.destination to color.
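One blend configuration from the steps above can be sketched as follows. This is an illustrative model of the "add" GPUBlendOperation with a handful of GPUBlendFactors, not the full normative table; colors are [r, g, b, a] arrays.

```javascript
// Sketch of color blending in output merging (illustrative, not normative).
function factor(name, src) {
  switch (name) {
    case "zero": return [0, 0, 0, 0];
    case "one": return [1, 1, 1, 1];
    case "src-alpha": return [src[3], src[3], src[3], src[3]];
    case "one-minus-src-alpha": {
      const a = 1 - src[3];
      return [a, a, a, a];
    }
    default: throw new Error("factor not modeled: " + name);
  }
}

// "add": result = src × srcFactor + dst × dstFactor, per component.
function blendAdd(src, dst, srcFactorName, dstFactorName) {
  const sf = factor(srcFactorName, src);
  const df = factor(dstFactorName, src);
  return src.map((s, i) => s * sf[i] + dst[i] * df[i]);
}
```

For example, the common "src-alpha" / "one-minus-src-alpha" pair composites a half-transparent red [1, 0, 0, 0.5] over opaque blue [0, 0, 1, 1] to [0.5, 0, 0.5, 0.75].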

23.2.8. No Color Output

In no-color-output mode, the pipeline does not produce any color attachment outputs.

The pipeline still performs rasterization and produces depth values based on the vertex position output. The depth testing and stencil operations can still be used.

23.2.9. Alpha to Coverage

In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated based on the alpha component of the fragment shader output value at @location(0).

The algorithm of producing the extra mask is platform-dependent and can vary for different pixels. It guarantees that:

23.2.10. Per-Sample Shading

When rendering into multisampled render attachments, fragment shaders can be run once per-pixel or once per-sample. Fragment shaders must run once per-sample if either the sample_index builtin or sample interpolation sampling is used and contributes to the shader output. Otherwise fragment shaders may run once per-pixel with the result broadcast out to each of the samples included in the final sample mask.

When using per-sample shading, the color output for sample N is produced by the fragment shader execution with sample_index == N for the current pixel.

23.2.11. Sample Masking

The final sample mask for a pixel is computed as: rasterization mask & mask & shader-output mask.

Only the lower count bits of the mask are considered.

If the bit at position N of the final sample mask is 0, the sample color outputs (corresponding to sample N) to all attachments of the fragment shader are discarded. Also, no depth test or stencil operations are executed on the relevant samples of the depth-stencil attachment.

The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. The samples included in the shape get the relevant bits 1 in the mask.

The shader-output mask takes the output value of "sample_mask" builtin in the fragment shader. If the builtin is not output from the fragment shader, and alphaToCoverageEnabled is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.
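The combination above can be sketched as a single expression. This is illustrative only: mask mirrors GPUMultisampleState.mask, and shaderOutputMask defaults to 0xFFFFFFFF when the shader does not write the sample_mask builtin.

```javascript
// Sketch of the final sample mask computation in § 23.2.11 (illustrative).
function finalSampleMask(rasterizationMask, mask, shaderOutputMask, count) {
  const combined = rasterizationMask & mask & shaderOutputMask;
  return combined & ((1 << count) - 1); // only the lower `count` bits are considered
}
```

For example, with count = 4, a rasterization mask of 0b1011 and a shader-output mask of 0b1110 yield a final mask of 0b1010: samples 1 and 3 are written, samples 0 and 2 are discarded.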

24. Type Definitions

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

typedef unsigned long long GPUSize64Out;
typedef unsigned long GPUIntegerCoordinateOut;
typedef unsigned long GPUSize32Out;

typedef unsigned long GPUFlagsConstant;

24.1. Colors & Vectors

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.

r, of type double

The red channel value.

g, of type double

The green channel value.

b, of type double

The blue channel value.

a, of type double

The alpha channel value.

For a given GPUColor value color, depending on its type, the syntax:
validate GPUColor shape(color)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if color is a sequence and color.size ≠ 4.

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
For a given GPUOrigin2D value origin, depending on its type, the syntax:
validate GPUOrigin2D shape(origin)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if origin is a sequence and origin.size > 2.

dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
For a given GPUOrigin3D value origin, depending on its type, the syntax:
validate GPUOrigin3D shape(origin)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if origin is a sequence and origin.size > 3.

dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
width, of type GPUIntegerCoordinate

The width of the extent.

height, of type GPUIntegerCoordinate, defaulting to 1

The height of the extent.

depthOrArrayLayers, of type GPUIntegerCoordinate, defaulting to 1

The depth of the extent or the number of array layers it contains. If used with a GPUTexture with a GPUTextureDimension of "3d" defines the depth of the texture. If used with a GPUTexture with a GPUTextureDimension of "2d" defines the number of array layers in the texture.

For a given GPUExtent3D value extent, depending on its type, the syntax:
validate GPUExtent3D shape(extent)

Arguments:

Returns: undefined

Content timeline steps:

  1. Throw a TypeError if:
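The dual sequence/dictionary shape of these typedefs can be sketched for GPUExtent3D. This is an illustrative normalization under the assumption that a sequence form must supply 1 to 3 elements (width required, height and depthOrArrayLayers defaulting to 1); it is not the normative validation algorithm.

```javascript
// Sketch of accepting a GPUExtent3D as a sequence or dictionary (illustrative).
function normalizeExtent3D(extent) {
  if (Array.isArray(extent)) {
    if (extent.length < 1 || extent.length > 3) {
      throw new TypeError("GPUExtent3D sequence must have 1 to 3 elements");
    }
    const [width, height = 1, depthOrArrayLayers = 1] = extent;
    return { width, height, depthOrArrayLayers };
  }
  return {
    width: extent.width,
    height: extent.height ?? 1,
    depthOrArrayLayers: extent.depthOrArrayLayers ?? 1,
  };
}
```

Under this model, [256, 256] and { width: 256, height: 256 } describe the same extent, with depthOrArrayLayers defaulting to 1 in both forms.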

25. Feature Index

25.1. "core-features-and-limits"

Allows all Core WebGPU features and limits to be used.

Note: This is currently available on all adapters and enabled automatically on all devices even if not requested.

25.2. "depth-clip-control"

Allows depth clipping to be disabled.

This feature adds the following optional API surfaces:

25.3. "depth32float-stencil8"

Allows for explicit creation of textures of format "depth32float-stencil8".

This feature adds the following optional API surfaces:

25.4. "texture-compression-bc"

Allows for explicit creation of textures of BC compressed formats which include the "S3TC", "RGTC", and "BPTC" formats. Only supports 2D textures.

Note: Adapters which support "texture-compression-bc" do not always support "texture-compression-bc-sliced-3d". To use "texture-compression-bc-sliced-3d", "texture-compression-bc" must be enabled explicitly as this feature does not enable the BC formats.

This feature adds the following optional API surfaces:

25.5. "texture-compression-bc-sliced-3d"

Allows the 3d dimension for textures with BC compressed formats.

Note: Adapters which support "texture-compression-bc" do not always support "texture-compression-bc-sliced-3d". To use "texture-compression-bc-sliced-3d", "texture-compression-bc" must be enabled explicitly as this feature does not enable the BC formats.

This feature adds no optional API surfaces.

25.6. "texture-compression-etc2"

Allows for explicit creation of textures of ETC2 compressed formats. Only supports 2D textures.

This feature adds the following optional API surfaces:

25.7. "texture-compression-astc"

Allows for explicit creation of textures of ASTC compressed formats. Only supports 2D textures.

This feature adds the following optional API surfaces:

25.8. "texture-compression-astc-sliced-3d"

Allows the 3d dimension for textures with ASTC compressed formats.

Note: Adapters which support "texture-compression-astc" do not always support "texture-compression-astc-sliced-3d". To use "texture-compression-astc-sliced-3d", "texture-compression-astc" must be enabled explicitly as this feature does not enable the ASTC formats.

This feature adds no optional API surfaces.

25.9. "timestamp-query"

Adds the ability to query timestamps from GPU command buffers. See § 20.4 Timestamp Query.

This feature adds the following optional API surfaces:

25.10. "indirect-first-instance"

Allows the use of non-zero firstInstance values in indirect draw parameters and indirect drawIndexed parameters.

This feature adds no optional API surfaces.

25.11. "shader-f16"

Allows the use of the half-precision floating-point type f16 in WGSL.

This feature adds the following optional API surfaces:

25.12. "rg11b10ufloat-renderable"

Allows the RENDER_ATTACHMENT usage on textures with format "rg11b10ufloat", and also allows textures of that format to be blended, multisampled, and resolved.

This feature adds no optional API surfaces.

Enabling "texture-formats-tier1" at device creation will also enable "rg11b10ufloat-renderable".

25.13. "bgra8unorm-storage"

Allows the STORAGE_BINDING usage on textures with format "bgra8unorm".

This feature adds no optional API surfaces.

25.14. "float32-filterable"

Makes textures with formats "r32float", "rg32float", and "rgba32float" filterable.

This feature adds no optional API surfaces.

25.15. "float32-blendable"

Makes textures with formats "r32float", "rg32float", and "rgba32float" blendable.

This feature adds no optional API surfaces.

25.16. "clip-distances"

Allows the use of clip_distances in WGSL.

This feature adds the following optional API surfaces:

25.17. "dual-source-blending"

Allows the use of blend_src in WGSL and simultaneously using both pixel shader outputs (@blend_src(0) and @blend_src(1)) as inputs to a blending operation with the single color attachment at location 0.

This feature adds the following optional API surfaces:

25.18. "subgroups"

Allows the use of the subgroup and quad operations in WGSL.

This feature adds no optional API surfaces, but the following entries of GPUAdapterInfo expose real values whenever the feature is available on the adapter:

25.19. "texture-formats-tier1"

Supports the new GPUTextureFormats below with the RENDER_ATTACHMENT, blendable, and multisampling capabilities, and with the STORAGE_BINDING capability using the "read-only" and "write-only" GPUStorageTextureAccess modes:

Allows the RENDER_ATTACHMENT, blendable, multisampling, and resolve capabilities on the GPUTextureFormats below:

Allows the "read-only" or "write-only" GPUStorageTextureAccess on the GPUTextureFormats below:

Enabling "texture-formats-tier2" at device creation will also enable "texture-formats-tier1".

Enabling "texture-formats-tier1" at device creation will also enable "rg11b10ufloat-renderable".

25.20. "texture-formats-tier2"

Allows the "read-write" GPUStorageTextureAccess on the GPUTextureFormats below:

Enabling "texture-formats-tier2" at device creation will also enable "texture-formats-tier1".

25.21. "primitive-index"

Allows the use of primitive_index in WGSL.

This feature adds the following optional API surfaces:

26. Appendices

26.1. Texture Format Capabilities

26.1.1. Plain color formats

All supported plain color formats support usages COPY_SRC, COPY_DST, and TEXTURE_BINDING, and dimension "3d".

The RENDER_ATTACHMENT and STORAGE_BINDING columns specify support for GPUTextureUsage.RENDER_ATTACHMENT and GPUTextureUsage.STORAGE_BINDING usage respectively.

The render target pixel byte cost and render target component alignment are used to validate the maxColorAttachmentBytesPerSample limit.

Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.

Format Required Feature GPUTextureSampleType RENDER_ATTACHMENT blendable multisampling resolve STORAGE_BINDING ("write-only" / "read-only" / "read-write") Texel block copy footprint (Bytes) Render target pixel byte cost (Bytes)
8 bits per component (1-byte render target component alignment)
r8unorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 1
r8snorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 1
r8uint "uint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 1
r8sint "sint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 1
rg8unorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 2
rg8snorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 2
rg8uint "uint" If "texture-formats-tier1" is enabled 2
rg8sint "sint" If "texture-formats-tier1" is enabled 2
rgba8unorm "float",
"unfilterable-float"
If "texture-formats-tier2" is enabled 4 8
rgba8unorm-srgb "float",
"unfilterable-float"
4 8
rgba8snorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 4
rgba8uint "uint" If "texture-formats-tier2" is enabled 4
rgba8sint "sint" If "texture-formats-tier2" is enabled 4
bgra8unorm "float",
"unfilterable-float"
If "bgra8unorm-storage" is enabled 4 8
bgra8unorm-srgb "float",
"unfilterable-float"
4 8
16 bits per component (2-byte render target component alignment)
r16unorm "texture-formats-tier1" "unfilterable-float" 2
r16snorm "texture-formats-tier1" "unfilterable-float" 2
r16uint "uint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 2
r16sint "sint" If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 2
r16float "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled If "texture-formats-tier2" is enabled 2
rg16unorm "texture-formats-tier1" "unfilterable-float" 4
rg16snorm "texture-formats-tier1" "unfilterable-float" 4
rg16uint "uint" If "texture-formats-tier1" is enabled 4
rg16sint "sint" If "texture-formats-tier1" is enabled 4
rg16float "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 4
rgba16unorm "texture-formats-tier1" "unfilterable-float" 8
rgba16snorm "texture-formats-tier1" "unfilterable-float" 8
rgba16uint "uint" If "texture-formats-tier2" is enabled 8
rgba16sint "sint" If "texture-formats-tier2" is enabled 8
rgba16float "float",
"unfilterable-float"
If "texture-formats-tier2" is enabled 8
32 bits per component (4-byte render target component alignment)
r32uint "uint" 4
r32sint "sint" 4
r32float

"float" if "float32-filterable" is enabled

"unfilterable-float"

If "float32-blendable" is enabled 4
rg32uint "uint" 8
rg32sint "sint" 8
rg32float

"float" if "float32-filterable" is enabled

"unfilterable-float"

If "float32-blendable" is enabled 8
rgba32uint "uint" If "texture-formats-tier2" is enabled 16
rgba32sint "sint" If "texture-formats-tier2" is enabled 16
rgba32float

"float" if "float32-filterable" is enabled

"unfilterable-float"

If "float32-blendable" is enabled If "texture-formats-tier2" is enabled 16
mixed component width, 32 bits per texel (4-byte render target component alignment)
rgb10a2uint "uint" If "texture-formats-tier1" is enabled 4 8
rgb10a2unorm "float",
"unfilterable-float"
If "texture-formats-tier1" is enabled 4 8
rg11b10ufloat "float",
"unfilterable-float"
If "rg11b10ufloat-renderable" is enabled If "texture-formats-tier1" is enabled 4 8

26.1.2. Depth-stencil formats

A depth-or-stencil format is any format with depth and/or stencil aspects. A combined depth-stencil format is a depth-or-stencil format that has both depth and stencil aspects.

All depth-or-stencil formats support the COPY_SRC, COPY_DST, TEXTURE_BINDING, and RENDER_ATTACHMENT usages. All of these formats support multisampling. However, certain copy operations also restrict the source and destination formats, and none of these formats support textures with "3d" dimension.

Depth textures cannot be used with "filtering" samplers, but can always be used with "comparison" samplers even if they use filtering.

Format Texel block memory cost (Bytes) Aspect GPUTextureSampleType Valid texel copy source Valid texel copy destination Texel block copy footprint (Bytes) Aspect-specific format
stencil8 1 − 4 stencil "uint" 1 stencil8
depth16unorm 2 depth "depth", "unfilterable-float" 2 depth16unorm
depth24plus 4 depth "depth", "unfilterable-float" depth24plus
depth24plus-stencil8 4 − 8 depth "depth", "unfilterable-float" depth24plus
stencil "uint" 1 stencil8
depth32float 4 depth "depth", "unfilterable-float" 4 depth32float
depth32float-stencil8 5 − 8 depth "depth", "unfilterable-float" 4 depth32float
stencil "uint" 1 stencil8

24-bit depth refers to a 24-bit unsigned normalized depth format with a range from 0.0 to 1.0, which would be spelled "depth24unorm" if exposed.

26.1.2.1. Reading and Sampling Depth/Stencil Textures

It is possible to bind a depth-aspect GPUTextureView to either a texture_depth_* binding or a binding with other non-depth 2d/cube texture types.

A stencil-aspect GPUTextureView must be bound to a normal texture binding type. The sampleType in the GPUBindGroupLayout must be "uint".

Reading or sampling the depth or stencil aspect of a texture behaves as if the texture contains the values (V, X, X, X), where V is the actual depth or stencil value, and each X is an implementation-defined unspecified value.

For depth-aspect bindings, the unspecified values are not visible through bindings with texture_depth_* types.

If a depth texture is bound to tex with type texture_2d<f32>:

Note: Short of adding a new more constrained stencil sampler type (like depth), it’s infeasible for implementations to efficiently paper over the driver differences for depth/stencil reads. As this was not a portability pain point for WebGL, it’s not expected to be problematic in WebGPU. In practice, expect either (V, V, V, V) or (V, 0, 0, 1) (where V is the depth or stencil value), depending on hardware.

26.1.2.2. Copying Depth/Stencil Textures

The depth aspects of depth32float formats ("depth32float" and "depth32float-stencil8") have a limited range. As a result, copies into such textures are only valid from other textures of the same format.

The depth aspects of depth24plus formats ("depth24plus" and "depth24plus-stencil8") have opaque representations (implemented as either 24-bit depth or "depth32float"). As a result, depth-aspect texel copies are not allowed with these formats.

NOTE:
It is possible to imitate these disallowed copies:

26.1.3. Packed formats

All packed texture formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usages. All of these formats are filterable. None of these formats are renderable or support multisampling.

A compressed format is any format with a block size greater than 1×1.

Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.

Format Texel block copy footprint (Bytes) GPUTextureSampleType Texel block width/height "3d" Feature
rgb9e5ufloat 4 "float",
"unfilterable-float"
1 × 1
bc1-rgba-unorm 8 "float",
"unfilterable-float"
4 × 4 If "texture-compression-bc-sliced-3d" is enabled texture-compression-bc
bc1-rgba-unorm-srgb
bc2-rgba-unorm 16
bc2-rgba-unorm-srgb
bc3-rgba-unorm 16
bc3-rgba-unorm-srgb
bc4-r-unorm 8
bc4-r-snorm
bc5-rg-unorm 16
bc5-rg-snorm
bc6h-rgb-ufloat 16
bc6h-rgb-float
bc7-rgba-unorm 16
bc7-rgba-unorm-srgb
etc2-rgb8unorm 8 "float",
"unfilterable-float"
4 × 4 texture-compression-etc2
etc2-rgb8unorm-srgb
etc2-rgb8a1unorm 8
etc2-rgb8a1unorm-srgb
etc2-rgba8unorm 16
etc2-rgba8unorm-srgb
eac-r11unorm 8
eac-r11snorm
eac-rg11unorm 16
eac-rg11snorm
astc-4x4-unorm 16 "float",
"unfilterable-float"
4 × 4 If "texture-compression-astc-sliced-3d" is enabled texture-compression-astc
astc-4x4-unorm-srgb
astc-5x4-unorm 16 5 × 4
astc-5x4-unorm-srgb
astc-5x5-unorm 16 5 × 5
astc-5x5-unorm-srgb
astc-6x5-unorm 16 6 × 5
astc-6x5-unorm-srgb
astc-6x6-unorm 16 6 × 6
astc-6x6-unorm-srgb
astc-8x5-unorm 16 8 × 5
astc-8x5-unorm-srgb
astc-8x6-unorm 16 8 × 6
astc-8x6-unorm-srgb
astc-8x8-unorm 16 8 × 8
astc-8x8-unorm-srgb
astc-10x5-unorm 16 10 × 5
astc-10x5-unorm-srgb
astc-10x6-unorm 16 10 × 6
astc-10x6-unorm-srgb
astc-10x8-unorm 16 10 × 8
astc-10x8-unorm-srgb
astc-10x10-unorm 16 10 × 10
astc-10x10-unorm-srgb
astc-12x10-unorm 16 12 × 10
astc-12x10-unorm-srgb
astc-12x12-unorm 16 12 × 12
astc-12x12-unorm-srgb

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
[HR-TIME-3]
Yoav Weiss. High Resolution Time. 7 November 2024. WD. URL: https://www.w3.org/TR/hr-time-3/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[I18N-GLOSSARY]
Richard Ishida; Addison Phillips. Internationalization Glossary. 17 October 2024. NOTE. URL: https://www.w3.org/TR/i18n-glossary/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[WEBCODECS]
Paul Adenot; Eugene Zemtsov. WebCodecs. 8 July 2025. WD. URL: https://www.w3.org/TR/webcodecs/
[WEBGL-1]
Dean Jackson; Jeff Gilbert. WebGL Specification, Version 1.0. 9 August 2017. URL: https://www.khronos.org/registry/webgl/specs/latest/1.0/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[WEBXR]
Brandon Jones; Manish Goregaokar; Rik Cabanier. WebXR Device API. 17 April 2025. CRD. URL: https://www.w3.org/TR/webxr/
[WGSL]
Alan Baker; Mehmet Oguz Derin; David Neto. WebGPU Shading Language. 9 August 2025. CRD. URL: https://www.w3.org/TR/WGSL/

Informative References

[MEDIAQUERIES-5]
Dean Jackson; et al. Media Queries Level 5. 18 December 2021. WD. URL: https://www.w3.org/TR/mediaqueries-5/
[SERVICE-WORKERS]
Yoshisato Yanagisawa; Monica CHINTALA. Service Workers. 6 March 2025. CRD. URL: https://www.w3.org/TR/service-workers/
[VULKAN]
The Khronos Vulkan Working Group. Vulkan 1.3. URL: https://registry.khronos.org/vulkan/specs/1.3/html/vkspec.html

IDL Index

interface mixin GPUObjectBase {
    attribute USVString label;
};

dictionary GPUObjectDescriptorBase {
    USVString label = "";
};

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};

[Exposed=(Window, Worker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
    readonly attribute unsigned long subgroupMinSize;
    readonly attribute unsigned long subgroupMaxSize;
    readonly attribute boolean isFallbackAdapter;
};

interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;

[Exposed=(Window, Worker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
    GPUTextureFormat getPreferredCanvasFormat();
    [SameObject] readonly attribute WGSLLanguageFeatures wgslLanguageFeatures;
};

dictionary GPURequestAdapterOptions {
    DOMString featureLevel = "core";
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
    boolean xrCompatible = false;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUAdapter {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo info;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

dictionary GPUDeviceDescriptor
         : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, (GPUSize64 or undefined)> requiredLimits = {};
    GPUQueueDescriptor defaultQueue = {};
};

enum GPUFeatureName {
    "core-features-and-limits",
    "depth-clip-control",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-bc-sliced-3d",
    "texture-compression-etc2",
    "texture-compression-astc",
    "texture-compression-astc-sliced-3d",
    "timestamp-query",
    "indirect-first-instance",
    "shader-f16",
    "rg11b10ufloat-renderable",
    "bgra8unorm-storage",
    "float32-filterable",
    "float32-blendable",
    "clip-distances",
    "dual-source-blending",
    "subgroups",
    "texture-formats-tier1",
    "texture-formats-tier2",
    "primitive-index",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUAdapterInfo adapterInfo;

    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

[Exposed=(Window, Worker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;

    readonly attribute GPUBufferMapState mapState;

    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};

dictionary GPUBufferDescriptor
         : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};

typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
};
GPUTexture includes GPUObjectBase;

dictionary GPUTextureDescriptor
         : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
};

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

dictionary GPUTextureViewDescriptor
         : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureUsageFlags usage = 0;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16unorm",
    "r16snorm",
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16unorm",
    "rg16snorm",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16unorm",
    "rgba16snorm",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;

dictionary GPUExternalTextureDescriptor
         : GPUObjectDescriptorBase {
    required (HTMLVideoElement or VideoFrame) source;
    PredefinedColorSpace colorSpace = "srgb";
};

[Exposed=(Window, Worker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

dictionary GPUSamplerDescriptor
         : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUMipmapFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};

enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUMipmapFilterMode {
    "nearest",
    "linear",
};

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

dictionary GPUBindGroupLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;

    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX   = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE  = 0x4;
};

enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};

enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};

enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};

enum GPUStorageTextureAccess {
    "write-only",
    "read-only",
    "read-write",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};

dictionary GPUExternalTextureBindingLayout {
};

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

dictionary GPUBindGroupDescriptor
         : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

typedef (GPUSampler or
         GPUTexture or
         GPUTextureView or
         GPUBuffer or
         GPUBufferBinding or
         GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

dictionary GPUPipelineLayoutDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout?> bindGroupLayouts;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> getCompilationInfo();
};
GPUShaderModule includes GPUObjectBase;

dictionary GPUShaderModuleDescriptor
         : GPUObjectDescriptorBase {
    required USVString code;
    sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};

dictionary GPUShaderModuleCompilationHint {
    required USVString entryPoint;
    (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};

[Exposed=(Window, Worker), SecureContext, Serializable]
interface GPUPipelineError : DOMException {
    constructor(optional DOMString message = "", GPUPipelineErrorInit options);
    readonly attribute GPUPipelineErrorReason reason;
};

dictionary GPUPipelineErrorInit {
    required GPUPipelineErrorReason reason;
};

enum GPUPipelineErrorReason {
    "validation",
    "internal",
};

enum GPUAutoLayoutMode {
    "auto",
};

dictionary GPUPipelineDescriptorBase
         : GPUObjectDescriptorBase {
    required (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};

interface mixin GPUPipelineBase {
    [NewObject] GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants = {};
};

typedef double GPUPipelineConstantValue; // May represent WGSL's bool, f32, i32, u32, and f16 if enabled.

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

dictionary GPUComputePipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

dictionary GPURenderPipelineDescriptor
         : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};
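A non-normative sketch of a minimal `GPURenderPipelineDescriptor`, using `layout: "auto"` (the `GPUAutoLayoutMode` value) so bind group layouts are derived from the shader. The `shaderModule` placeholder stands in for a real `GPUShaderModule`, and the entry point names `vs_main`/`fs_main` are illustrative:

```javascript
// Placeholder for a real GPUShaderModule from device.createShaderModule().
const shaderModule = {};

// Minimal descriptor for an opaque, back-face-culled triangle-list pipeline.
const pipelineDescriptor = {
  layout: "auto", // GPUAutoLayoutMode: derive bind group layouts from the shader
  vertex: { module: shaderModule, entryPoint: "vs_main" },
  primitive: { topology: "triangle-list", cullMode: "back" },
  fragment: {
    module: shaderModule,
    entryPoint: "fs_main",
    targets: [{ format: "bgra8unorm" }], // one GPUColorTargetState
  },
};
```

Omitting `fragment` entirely yields a depth/stencil-only (or vertex-only) pipeline, and omitting `depthStencil` and `multisample` accepts their defaults.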

dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // Requires "depth-clip-control" feature.
    boolean unclippedDepth = false;
};

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip",
};

enum GPUFrontFace {
    "ccw",
    "cw",
};

enum GPUCullMode {
    "none",
    "front",
    "back",
};

dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

dictionary GPUFragmentState
         : GPUProgrammableStage {
    required sequence<GPUColorTargetState?> targets;
};

dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};
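Illustratively, a `writeMask` is a bitwise OR of the `GPUColorWrite` constants. The plain object below mirrors the namespace for demonstration only:

```javascript
// JS mirror of the GPUColorWrite namespace constants (illustration only).
const ColorWrite = {
  RED: 0x1, GREEN: 0x2, BLUE: 0x4, ALPHA: 0x8, ALL: 0xF,
};

// e.g. write the color channels but leave destination alpha untouched:
const colorOnlyMask = ColorWrite.RED | ColorWrite.GREEN | ColorWrite.BLUE; // 0x7
```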

dictionary GPUBlendComponent {
    GPUBlendOperation operation = "add";
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
};

enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant",
    "src1",
    "one-minus-src1",
    "src1-alpha",
    "one-minus-src1-alpha",
};

enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max",
};
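Non-normatively, a `GPUBlendComponent` configures `result = operation(src × srcFactor, dst × dstFactor)` per channel, with `min`/`max` ignoring the factors. The sketch below models only a subset of `GPUBlendFactor` values, enough to show classic "source over" alpha blending:

```javascript
// Evaluate a GPUBlendFactor for one channel (subset for illustration).
function factorValue(factor, src, dst, srcAlpha) {
  switch (factor) {
    case "zero": return 0;
    case "one": return 1;
    case "src": return src;
    case "dst": return dst;
    case "src-alpha": return srcAlpha;
    case "one-minus-src-alpha": return 1 - srcAlpha;
    default: throw new Error("factor not modeled: " + factor);
  }
}

// Apply a GPUBlendComponent to a single channel.
function blendChannel({ operation = "add", srcFactor = "one", dstFactor = "zero" },
                      src, dst, srcAlpha) {
  const s = src * factorValue(srcFactor, src, dst, srcAlpha);
  const d = dst * factorValue(dstFactor, src, dst, srcAlpha);
  switch (operation) {
    case "add": return s + d;
    case "subtract": return s - d;
    case "reverse-subtract": return d - s;
    case "min": return Math.min(src, dst); // min/max ignore the blend factors
    case "max": return Math.max(src, dst);
  }
}

// Classic "source over" alpha blending for the color channels:
const over = { operation: "add", srcFactor: "src-alpha", dstFactor: "one-minus-src-alpha" };
const result = blendChannel(over, 1.0, 0.0, 0.5); // 1.0*0.5 + 0.0*0.5 = 0.5
```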

dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled;
    GPUCompareFunction depthCompare;

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap",
};

enum GPUIndexFormat {
    "uint16",
    "uint32",
};

enum GPUVertexFormat {
    "uint8",
    "uint8x2",
    "uint8x4",
    "sint8",
    "sint8x2",
    "sint8x4",
    "unorm8",
    "unorm8x2",
    "unorm8x4",
    "snorm8",
    "snorm8x2",
    "snorm8x4",
    "uint16",
    "uint16x2",
    "uint16x4",
    "sint16",
    "sint16x2",
    "sint16x4",
    "unorm16",
    "unorm16x2",
    "unorm16x4",
    "snorm16",
    "snorm16x2",
    "snorm16x4",
    "float16",
    "float16x2",
    "float16x4",
    "float32",
    "float32x2",
    "float32x3",
    "float32x4",
    "uint32",
    "uint32x2",
    "uint32x3",
    "uint32x4",
    "sint32",
    "sint32x2",
    "sint32x3",
    "sint32x4",
    "unorm10-10-10-2",
    "unorm8x4-bgra",
};

enum GPUVertexStepMode {
    "vertex",
    "instance",
};

dictionary GPUVertexState
         : GPUProgrammableStage {
    sequence<GPUVertexBufferLayout?> buffers = [];
};

dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUVertexStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};

dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};
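A tightly packed `GPUVertexBufferLayout` can be derived directly from the attribute formats. The sketch below hard-codes byte sizes for a handful of `GPUVertexFormat` values (an assumed subset, for illustration only):

```javascript
// Byte sizes for a few GPUVertexFormat values (illustrative subset).
const FORMAT_SIZE = {
  "float32": 4, "float32x2": 8, "float32x3": 12, "float32x4": 16,
  "uint8x4": 4, "unorm8x4": 4, "float16x2": 4, "float16x4": 8,
};

// Build a tightly packed GPUVertexBufferLayout-shaped object from a format list.
function packedLayout(formats, firstShaderLocation = 0) {
  let offset = 0;
  const attributes = formats.map((format, i) => {
    const attr = { format, offset, shaderLocation: firstShaderLocation + i };
    offset += FORMAT_SIZE[format];
    return attr;
  });
  return { arrayStride: offset, stepMode: "vertex", attributes };
}

// position (vec3<f32>) + uv (vec2<f32>): stride 20, offsets 0 and 12.
const layout = packedLayout(["float32x3", "float32x2"]);
```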

dictionary GPUTexelCopyBufferLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};
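For `copyBufferToTexture()` and `copyTextureToBuffer()` (though not `writeTexture()`), `bytesPerRow` must be a multiple of 256. A non-normative helper to compute an aligned value:

```javascript
// Round a row's byte length up to the 256-byte alignment required by
// GPUCommandEncoder buffer<->texture copies.
function alignedBytesPerRow(widthTexels, bytesPerTexel) {
  const unaligned = widthTexels * bytesPerTexel;
  return Math.ceil(unaligned / 256) * 256;
}

// rgba8unorm (4 bytes/texel), 100 texels wide: 400 -> 512.
const bytesPerRow = alignedBytesPerRow(100, 4);
const bufferLayout = { offset: 0, bytesPerRow, rowsPerImage: 100 }; // GPUTexelCopyBufferLayout
```

The padding between the tight row length and `bytesPerRow` is simply skipped by the copy.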

dictionary GPUTexelCopyBufferInfo
         : GPUTexelCopyBufferLayout {
    required GPUBuffer buffer;
};

dictionary GPUTexelCopyTextureInfo {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};

dictionary GPUCopyExternalImageDestInfo
         : GPUTexelCopyTextureInfo {
    PredefinedColorSpace colorSpace = "srgb";
    boolean premultipliedAlpha = false;
};

typedef (ImageBitmap or
         ImageData or
         HTMLImageElement or
         HTMLVideoElement or
         VideoFrame or
         HTMLCanvasElement or
         OffscreenCanvas) GPUCopyExternalImageSource;

dictionary GPUCopyExternalImageSourceInfo {
    required GPUCopyExternalImageSource source;
    GPUOrigin2D origin = {};
    boolean flipY = false;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

dictionary GPUCommandBufferDescriptor
         : GPUObjectDescriptorBase {
};

interface mixin GPUCommandsMixin {
};

[Exposed=(Window, Worker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUBuffer destination,
        optional GPUSize64 size);
    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        optional GPUSize64 size);

    undefined copyBufferToTexture(
        GPUTexelCopyBufferInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToBuffer(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyBufferInfo destination,
        GPUExtent3D copySize);

    undefined copyTextureToTexture(
        GPUTexelCopyTextureInfo source,
        GPUTexelCopyTextureInfo destination,
        GPUExtent3D copySize);

    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);

    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;

dictionary GPUCommandEncoderDescriptor
         : GPUObjectDescriptorBase {
};

interface mixin GPUBindingCommandsMixin {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        [AllowShared] Uint32Array dynamicOffsetsData,
        GPUSize64 dynamicOffsetsDataStart,
        GPUSize32 dynamicOffsetsDataLength);
};

interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};

[Exposed=(Window, Worker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatchWorkgroups(GPUSize32 workgroupCountX, optional GPUSize32 workgroupCountY = 1, optional GPUSize32 workgroupCountZ = 1);
    undefined dispatchWorkgroupsIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    undefined end();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUBindingCommandsMixin;
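Note that `dispatchWorkgroups()` counts workgroups, not shader invocations: a shader declared with `@workgroup_size(64)` covering N items needs ⌈N / 64⌉ groups. A minimal sketch:

```javascript
// dispatchWorkgroups() takes workgroup counts; to cover itemCount invocations
// with a given @workgroup_size, round the quotient up.
function workgroupCount(itemCount, workgroupSize) {
  return Math.ceil(itemCount / workgroupSize);
}

const groupsX = workgroupCount(1000, 64); // 16 groups; the shader guards the tail
// pass.dispatchWorkgroups(groupsX);
```

The final group overshoots the data, so the shader typically checks `global_invocation_id.x` against the item count before writing.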

dictionary GPUComputePassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};

dictionary GPUComputePassDescriptor
         : GPUObjectDescriptorBase {
    GPUComputePassTimestampWrites timestampWrites;
};

[Exposed=(Window, Worker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth);

    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);

    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();

    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined end();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUBindingCommandsMixin;
GPURenderPassEncoder includes GPURenderCommandsMixin;

dictionary GPURenderPassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};

dictionary GPURenderPassDescriptor
         : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment?> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
    GPURenderPassTimestampWrites timestampWrites;
    GPUSize64 maxDrawCount = 50000000;
};

dictionary GPURenderPassColorAttachment {
    required (GPUTexture or GPUTextureView) view;
    GPUIntegerCoordinate depthSlice;
    (GPUTexture or GPUTextureView) resolveTarget;

    GPUColor clearValue;
    required GPULoadOp loadOp;
    required GPUStoreOp storeOp;
};

dictionary GPURenderPassDepthStencilAttachment {
    required (GPUTexture or GPUTextureView) view;

    float depthClearValue;
    GPULoadOp depthLoadOp;
    GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    GPUStencilValue stencilClearValue = 0;
    GPULoadOp stencilLoadOp;
    GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};

enum GPULoadOp {
    "load",
    "clear",
};

enum GPUStoreOp {
    "store",
    "discard",
};

dictionary GPURenderPassLayout
         : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat?> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};

interface mixin GPURenderCommandsMixin {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat, optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer? buffer, optional GPUSize64 offset = 0, optional GPUSize64 size);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
        optional GPUSize32 firstIndex = 0,
        optional GPUSignedOffset32 baseVertex = 0,
        optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;

dictionary GPURenderBundleDescriptor
         : GPUObjectDescriptorBase {
};

[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUBindingCommandsMixin;
GPURenderBundleEncoder includes GPURenderCommandsMixin;

dictionary GPURenderBundleEncoderDescriptor
         : GPURenderPassLayout {
    boolean depthReadOnly = false;
    boolean stencilReadOnly = false;
};

dictionary GPUQueueDescriptor
         : GPUObjectDescriptorBase {
};

[Exposed=(Window, Worker), SecureContext]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);

    Promise<undefined> onSubmittedWorkDone();

    undefined writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        AllowSharedBufferSource data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    undefined writeTexture(
        GPUTexelCopyTextureInfo destination,
        AllowSharedBufferSource data,
        GPUTexelCopyBufferLayout dataLayout,
        GPUExtent3D size);

    undefined copyExternalImageToTexture(
        GPUCopyExternalImageSourceInfo source,
        GPUCopyExternalImageDestInfo destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;

[Exposed=(Window, Worker), SecureContext]
interface GPUQuerySet {
    undefined destroy();

    readonly attribute GPUQueryType type;
    readonly attribute GPUSize32Out count;
};
GPUQuerySet includes GPUObjectBase;

dictionary GPUQuerySetDescriptor
         : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
};

enum GPUQueryType {
    "occlusion",
    "timestamp",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUCanvasContext {
    readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;

    undefined configure(GPUCanvasConfiguration configuration);
    undefined unconfigure();

    GPUCanvasConfiguration? getConfiguration();
    GPUTexture getCurrentTexture();
};

enum GPUCanvasAlphaMode {
    "opaque",
    "premultiplied",
};

enum GPUCanvasToneMappingMode {
    "standard",
    "extended",
};

dictionary GPUCanvasToneMapping {
    GPUCanvasToneMappingMode mode = "standard";
};

dictionary GPUCanvasConfiguration {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    sequence<GPUTextureFormat> viewFormats = [];
    PredefinedColorSpace colorSpace = "srgb";
    GPUCanvasToneMapping toneMapping = {};
    GPUCanvasAlphaMode alphaMode = "opaque";
};
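A non-normative sketch of a typical `GPUCanvasConfiguration`. Here `device` stands in for a real `GPUDevice`, and in real code the `format` would normally come from `navigator.gpu.getPreferredCanvasFormat()` rather than being hard-coded:

```javascript
// Placeholder for a real GPUDevice from adapter.requestDevice().
const device = {};

const canvasConfiguration = {
  device,
  format: "bgra8unorm", // normally navigator.gpu.getPreferredCanvasFormat()
  usage: 0x10, // GPUTextureUsage.RENDER_ATTACHMENT (the default)
  alphaMode: "premultiplied", // composite the canvas with alpha
};
// context.configure(canvasConfiguration);
```

After `configure()`, each frame calls `getCurrentTexture()` to obtain the texture to render into for that frame.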

enum GPUDeviceLostReason {
    "unknown",
    "destroyed",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUDeviceLostInfo {
    readonly attribute GPUDeviceLostReason reason;
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUError {
    readonly attribute DOMString message;
};

[Exposed=(Window, Worker), SecureContext]
interface GPUValidationError
        : GPUError {
    constructor(DOMString message);
};

[Exposed=(Window, Worker), SecureContext]
interface GPUOutOfMemoryError
        : GPUError {
    constructor(DOMString message);
};

[Exposed=(Window, Worker), SecureContext]
interface GPUInternalError
        : GPUError {
    constructor(DOMString message);
};

enum GPUErrorFilter {
    "validation",
    "out-of-memory",
    "internal",
};

partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
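Error scopes nest as a stack: `popErrorScope()` resolves with the first error captured by that scope, or `null`, and a raised error is captured by the innermost enclosing scope whose filter matches its type. The class below is a non-normative model of that bookkeeping, not the real `GPUDevice` API:

```javascript
// Model of GPUDevice error-scope semantics (illustration only).
class ErrorScopeStack {
  constructor() { this.stack = []; }
  push(filter) { this.stack.push({ filter, error: null }); } // pushErrorScope(filter)
  pop() { return this.stack.pop().error; }                   // popErrorScope() result
  raise(type, message) {
    // Search from the innermost scope outward for a matching filter.
    for (let i = this.stack.length - 1; i >= 0; i--) {
      const scope = this.stack[i];
      if (scope.filter === type) {
        if (scope.error === null) scope.error = { type, message }; // first error wins
        return;
      }
    }
    // No matching scope: would fire as a GPUUncapturedErrorEvent.
  }
}

const scopes = new ErrorScopeStack();
scopes.push("out-of-memory");
scopes.push("validation");
scopes.raise("validation", "binding 0 missing");
const inner = scopes.pop(); // captured by the inner "validation" scope
const outer = scopes.pop(); // the "out-of-memory" scope saw nothing: null
```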

[Exposed=(Window, Worker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

partial interface GPUDevice {
    attribute EventHandler onuncapturederror;
};

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

typedef unsigned long long GPUSize64Out;
typedef unsigned long GPUIntegerCoordinateOut;
typedef unsigned long GPUSize32Out;

typedef unsigned long GPUFlagsConstant;

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;
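`GPUColor` accepts either a sequence of exactly four doubles or a `GPUColorDict`; a non-normative helper normalizing both forms to the dictionary shape:

```javascript
// Normalize a GPUColor ([r, g, b, a] or { r, g, b, a }) to the dict form.
function normalizeColor(color) {
  if (Array.isArray(color)) {
    if (color.length !== 4) throw new TypeError("GPUColor sequence must have 4 elements");
    const [r, g, b, a] = color;
    return { r, g, b, a };
  }
  return { r: color.r, g: color.g, b: color.b, a: color.a };
}
```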

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;

dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;

dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
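Similarly, `GPUExtent3D` accepts a sequence of one to three coordinates or a `GPUExtent3DDict`, with `height` and `depthOrArrayLayers` defaulting to 1; a non-normative normalization helper:

```javascript
// Normalize a GPUExtent3D ([w], [w, h], [w, h, d] or a dict) to the dict form.
function normalizeExtent(extent) {
  if (typeof extent[Symbol.iterator] === "function") {
    const [width, height = 1, depthOrArrayLayers = 1] = extent;
    return { width, height, depthOrArrayLayers };
  }
  const { width, height = 1, depthOrArrayLayers = 1 } = extent;
  return { width, height, depthOrArrayLayers };
}
```

The same pattern applies to `GPUOrigin2D` and `GPUOrigin3D`, whose missing coordinates default to 0 instead of 1.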